Test Report: KVM_Linux_crio 21772

                    
efb80dd6659b26178e36f8b49f3cb836e30a0156:2025-10-19:41980

Failed tests (3/324)

Order  Failed test  Duration (s)
37 TestAddons/parallel/Ingress 158.48
244 TestPreload 166.68
287 TestPause/serial/SecondStartNoReconfiguration 376.56
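
These are standard Go integration tests (see the addons_test.go references in the log below), so a plausible way to re-run only the failed cases locally is the sketch below; the package path and timeout are assumptions about the repository layout, not values taken from this report:

    # Hypothetical local re-run of just the three failed tests with the standard Go test runner.
    go test ./test/integration \
      -run 'TestAddons/parallel/Ingress|TestPreload|TestPause/serial/SecondStartNoReconfiguration' \
      -timeout 120m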
TestAddons/parallel/Ingress (158.48s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-360741 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-360741 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-360741 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4df20794-dce6-4998-a971-227e36294dea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4df20794-dce6-4998-a971-227e36294dea] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005626534s
I1019 12:10:40.509615  148701 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-360741 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.521401808s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-360741 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.35
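
The curl probe through "minikube ssh" exited with status 28, which for curl conventionally means the transfer timed out, so nginx was never reachable through the ingress controller from inside the VM before the command gave up after roughly 2m14s. A manual debugging sketch that reuses the commands the test runs, plus an assumed ingress listing step and assumed curl verbosity/timeout flags:

    # Confirm the ingress-nginx controller is running and the Ingress object exists.
    kubectl --context addons-360741 -n ingress-nginx get pods -o wide
    kubectl --context addons-360741 get ingress
    # Repeat the in-VM probe with verbose output and an explicit timeout.
    out/minikube-linux-amd64 -p addons-360741 ssh \
      "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Check ingress-dns resolution from the host, as the test does.
    nslookup hello-john.test 192.168.39.35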
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-360741 -n addons-360741
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-360741 logs -n 25: (1.24208813s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-854705                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-854705 │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │ 19 Oct 25 12:06 UTC │
	│ start   │ --download-only -p binary-mirror-477717 --alsologtostderr --binary-mirror http://127.0.0.1:39483 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-477717 │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │                     │
	│ delete  │ -p binary-mirror-477717                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-477717 │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │ 19 Oct 25 12:06 UTC │
	│ addons  │ enable dashboard -p addons-360741                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │                     │
	│ addons  │ disable dashboard -p addons-360741                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │                     │
	│ start   │ -p addons-360741 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ enable headlamp -p addons-360741 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ ip      │ addons-360741 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ ssh     │ addons-360741 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-360741                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:11 UTC │
	│ ssh     │ addons-360741 ssh cat /opt/local-path-provisioner/pvc-2f20e001-4597-4197-a480-b51b1d034e34_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	│ addons  │ addons-360741 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:11 UTC │ 19 Oct 25 12:11 UTC │
	│ addons  │ addons-360741 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:11 UTC │ 19 Oct 25 12:11 UTC │
	│ addons  │ addons-360741 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:11 UTC │ 19 Oct 25 12:11 UTC │
	│ ip      │ addons-360741 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-360741        │ jenkins │ v1.37.0 │ 19 Oct 25 12:12 UTC │ 19 Oct 25 12:12 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:06:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:06:44.872181  149430 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:06:44.872485  149430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:06:44.872496  149430 out.go:374] Setting ErrFile to fd 2...
	I1019 12:06:44.872500  149430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:06:44.872690  149430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:06:44.873176  149430 out.go:368] Setting JSON to false
	I1019 12:06:44.874680  149430 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2939,"bootTime":1760872666,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:06:44.874911  149430 start.go:141] virtualization: kvm guest
	I1019 12:06:44.876426  149430 out.go:179] * [addons-360741] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:06:44.877467  149430 notify.go:220] Checking for updates...
	I1019 12:06:44.877517  149430 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:06:44.878958  149430 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:06:44.880217  149430 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:06:44.881220  149430 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:06:44.882124  149430 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:06:44.883111  149430 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:06:44.884196  149430 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:06:44.913034  149430 out.go:179] * Using the kvm2 driver based on user configuration
	I1019 12:06:44.914067  149430 start.go:305] selected driver: kvm2
	I1019 12:06:44.914079  149430 start.go:925] validating driver "kvm2" against <nil>
	I1019 12:06:44.914089  149430 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:06:44.914812  149430 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:06:44.914881  149430 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:06:44.928448  149430 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:06:44.928471  149430 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:06:44.941086  149430 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:06:44.941122  149430 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:06:44.941418  149430 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:06:44.941446  149430 cni.go:84] Creating CNI manager for ""
	I1019 12:06:44.941491  149430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:06:44.941499  149430 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1019 12:06:44.941547  149430 start.go:349] cluster config:
	{Name:addons-360741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-360741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1019 12:06:44.941633  149430 iso.go:125] acquiring lock: {Name:mk95990edcd162f08eff1d65580753d7d9806693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:06:44.943124  149430 out.go:179] * Starting "addons-360741" primary control-plane node in "addons-360741" cluster
	I1019 12:06:44.944353  149430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:06:44.944383  149430 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:06:44.944391  149430 cache.go:58] Caching tarball of preloaded images
	I1019 12:06:44.944482  149430 preload.go:233] Found /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:06:44.944493  149430 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:06:44.944810  149430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/config.json ...
	I1019 12:06:44.944832  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/config.json: {Name:mk18f3504ed95cccff9e142adadf7d58dc5dd733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:06:44.944957  149430 start.go:360] acquireMachinesLock for addons-360741: {Name:mk205e9aa7c82fb04c974fad7345827c2806baf1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 12:06:44.944997  149430 start.go:364] duration metric: took 27.914µs to acquireMachinesLock for "addons-360741"
	I1019 12:06:44.945013  149430 start.go:93] Provisioning new machine with config: &{Name:addons-360741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-360741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:06:44.945072  149430 start.go:125] createHost starting for "" (driver="kvm2")
	I1019 12:06:44.946476  149430 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1019 12:06:44.946611  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:06:44.946646  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:06:44.959857  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I1019 12:06:44.960385  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:06:44.960965  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:06:44.960988  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:06:44.961392  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:06:44.961586  149430 main.go:141] libmachine: (addons-360741) Calling .GetMachineName
	I1019 12:06:44.961721  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:06:44.961874  149430 start.go:159] libmachine.API.Create for "addons-360741" (driver="kvm2")
	I1019 12:06:44.961906  149430 client.go:168] LocalClient.Create starting
	I1019 12:06:44.961942  149430 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem
	I1019 12:06:45.025561  149430 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem
	I1019 12:06:45.407560  149430 main.go:141] libmachine: Running pre-create checks...
	I1019 12:06:45.407584  149430 main.go:141] libmachine: (addons-360741) Calling .PreCreateCheck
	I1019 12:06:45.408074  149430 main.go:141] libmachine: (addons-360741) Calling .GetConfigRaw
	I1019 12:06:45.408480  149430 main.go:141] libmachine: Creating machine...
	I1019 12:06:45.408493  149430 main.go:141] libmachine: (addons-360741) Calling .Create
	I1019 12:06:45.408707  149430 main.go:141] libmachine: (addons-360741) creating domain...
	I1019 12:06:45.408733  149430 main.go:141] libmachine: (addons-360741) creating network...
	I1019 12:06:45.410218  149430 main.go:141] libmachine: (addons-360741) DBG | found existing default network
	I1019 12:06:45.410406  149430 main.go:141] libmachine: (addons-360741) DBG | <network>
	I1019 12:06:45.410415  149430 main.go:141] libmachine: (addons-360741) DBG |   <name>default</name>
	I1019 12:06:45.410422  149430 main.go:141] libmachine: (addons-360741) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1019 12:06:45.410433  149430 main.go:141] libmachine: (addons-360741) DBG |   <forward mode='nat'>
	I1019 12:06:45.410441  149430 main.go:141] libmachine: (addons-360741) DBG |     <nat>
	I1019 12:06:45.410448  149430 main.go:141] libmachine: (addons-360741) DBG |       <port start='1024' end='65535'/>
	I1019 12:06:45.410456  149430 main.go:141] libmachine: (addons-360741) DBG |     </nat>
	I1019 12:06:45.410470  149430 main.go:141] libmachine: (addons-360741) DBG |   </forward>
	I1019 12:06:45.410480  149430 main.go:141] libmachine: (addons-360741) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1019 12:06:45.410495  149430 main.go:141] libmachine: (addons-360741) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1019 12:06:45.410506  149430 main.go:141] libmachine: (addons-360741) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1019 12:06:45.410517  149430 main.go:141] libmachine: (addons-360741) DBG |     <dhcp>
	I1019 12:06:45.410531  149430 main.go:141] libmachine: (addons-360741) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1019 12:06:45.410539  149430 main.go:141] libmachine: (addons-360741) DBG |     </dhcp>
	I1019 12:06:45.410546  149430 main.go:141] libmachine: (addons-360741) DBG |   </ip>
	I1019 12:06:45.410554  149430 main.go:141] libmachine: (addons-360741) DBG | </network>
	I1019 12:06:45.410560  149430 main.go:141] libmachine: (addons-360741) DBG | 
	I1019 12:06:45.411921  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:45.411731  149459 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000210dc0}
	I1019 12:06:45.411976  149430 main.go:141] libmachine: (addons-360741) DBG | defining private network:
	I1019 12:06:45.411995  149430 main.go:141] libmachine: (addons-360741) DBG | 
	I1019 12:06:45.412011  149430 main.go:141] libmachine: (addons-360741) DBG | <network>
	I1019 12:06:45.412017  149430 main.go:141] libmachine: (addons-360741) DBG |   <name>mk-addons-360741</name>
	I1019 12:06:45.412022  149430 main.go:141] libmachine: (addons-360741) DBG |   <dns enable='no'/>
	I1019 12:06:45.412028  149430 main.go:141] libmachine: (addons-360741) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1019 12:06:45.412034  149430 main.go:141] libmachine: (addons-360741) DBG |     <dhcp>
	I1019 12:06:45.412039  149430 main.go:141] libmachine: (addons-360741) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1019 12:06:45.412044  149430 main.go:141] libmachine: (addons-360741) DBG |     </dhcp>
	I1019 12:06:45.412050  149430 main.go:141] libmachine: (addons-360741) DBG |   </ip>
	I1019 12:06:45.412061  149430 main.go:141] libmachine: (addons-360741) DBG | </network>
	I1019 12:06:45.412067  149430 main.go:141] libmachine: (addons-360741) DBG | 
	I1019 12:06:45.417547  149430 main.go:141] libmachine: (addons-360741) DBG | creating private network mk-addons-360741 192.168.39.0/24...
	I1019 12:06:45.481324  149430 main.go:141] libmachine: (addons-360741) DBG | private network mk-addons-360741 192.168.39.0/24 created
	I1019 12:06:45.481581  149430 main.go:141] libmachine: (addons-360741) DBG | <network>
	I1019 12:06:45.481602  149430 main.go:141] libmachine: (addons-360741) DBG |   <name>mk-addons-360741</name>
	I1019 12:06:45.481611  149430 main.go:141] libmachine: (addons-360741) setting up store path in /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741 ...
	I1019 12:06:45.481640  149430 main.go:141] libmachine: (addons-360741) building disk image from file:///home/jenkins/minikube-integration/21772-144655/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1019 12:06:45.481657  149430 main.go:141] libmachine: (addons-360741) DBG |   <uuid>d9de1fe1-26cd-4e6c-b66b-7bcc565cc3c4</uuid>
	I1019 12:06:45.481668  149430 main.go:141] libmachine: (addons-360741) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1019 12:06:45.481678  149430 main.go:141] libmachine: (addons-360741) DBG |   <mac address='52:54:00:3a:4a:0d'/>
	I1019 12:06:45.481692  149430 main.go:141] libmachine: (addons-360741) DBG |   <dns enable='no'/>
	I1019 12:06:45.481704  149430 main.go:141] libmachine: (addons-360741) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1019 12:06:45.481712  149430 main.go:141] libmachine: (addons-360741) DBG |     <dhcp>
	I1019 12:06:45.481721  149430 main.go:141] libmachine: (addons-360741) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1019 12:06:45.481731  149430 main.go:141] libmachine: (addons-360741) DBG |     </dhcp>
	I1019 12:06:45.481739  149430 main.go:141] libmachine: (addons-360741) DBG |   </ip>
	I1019 12:06:45.481749  149430 main.go:141] libmachine: (addons-360741) DBG | </network>
	I1019 12:06:45.481765  149430 main.go:141] libmachine: (addons-360741) Downloading /home/jenkins/minikube-integration/21772-144655/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21772-144655/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1019 12:06:45.481784  149430 main.go:141] libmachine: (addons-360741) DBG | 
	I1019 12:06:45.481802  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:45.481575  149459 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:06:45.782662  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:45.782459  149459 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa...
	I1019 12:06:45.969630  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:45.969494  149459 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/addons-360741.rawdisk...
	I1019 12:06:45.969662  149430 main.go:141] libmachine: (addons-360741) DBG | Writing magic tar header
	I1019 12:06:45.969689  149430 main.go:141] libmachine: (addons-360741) DBG | Writing SSH key tar header
	I1019 12:06:45.969699  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:45.969634  149459 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741 ...
	I1019 12:06:45.969758  149430 main.go:141] libmachine: (addons-360741) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741
	I1019 12:06:45.969777  149430 main.go:141] libmachine: (addons-360741) setting executable bit set on /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741 (perms=drwx------)
	I1019 12:06:45.969824  149430 main.go:141] libmachine: (addons-360741) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655/.minikube/machines
	I1019 12:06:45.969849  149430 main.go:141] libmachine: (addons-360741) setting executable bit set on /home/jenkins/minikube-integration/21772-144655/.minikube/machines (perms=drwxr-xr-x)
	I1019 12:06:45.969861  149430 main.go:141] libmachine: (addons-360741) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:06:45.969873  149430 main.go:141] libmachine: (addons-360741) setting executable bit set on /home/jenkins/minikube-integration/21772-144655/.minikube (perms=drwxr-xr-x)
	I1019 12:06:45.969886  149430 main.go:141] libmachine: (addons-360741) setting executable bit set on /home/jenkins/minikube-integration/21772-144655 (perms=drwxrwxr-x)
	I1019 12:06:45.969892  149430 main.go:141] libmachine: (addons-360741) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1019 12:06:45.969899  149430 main.go:141] libmachine: (addons-360741) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1019 12:06:45.969904  149430 main.go:141] libmachine: (addons-360741) defining domain...
	I1019 12:06:45.969930  149430 main.go:141] libmachine: (addons-360741) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655
	I1019 12:06:45.969949  149430 main.go:141] libmachine: (addons-360741) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1019 12:06:45.969959  149430 main.go:141] libmachine: (addons-360741) DBG | checking permissions on dir: /home/jenkins
	I1019 12:06:45.969966  149430 main.go:141] libmachine: (addons-360741) DBG | checking permissions on dir: /home
	I1019 12:06:45.969974  149430 main.go:141] libmachine: (addons-360741) DBG | skipping /home - not owner
	I1019 12:06:45.970938  149430 main.go:141] libmachine: (addons-360741) defining domain using XML: 
	I1019 12:06:45.970969  149430 main.go:141] libmachine: (addons-360741) <domain type='kvm'>
	I1019 12:06:45.970980  149430 main.go:141] libmachine: (addons-360741)   <name>addons-360741</name>
	I1019 12:06:45.970988  149430 main.go:141] libmachine: (addons-360741)   <memory unit='MiB'>4096</memory>
	I1019 12:06:45.970996  149430 main.go:141] libmachine: (addons-360741)   <vcpu>2</vcpu>
	I1019 12:06:45.971002  149430 main.go:141] libmachine: (addons-360741)   <features>
	I1019 12:06:45.971019  149430 main.go:141] libmachine: (addons-360741)     <acpi/>
	I1019 12:06:45.971027  149430 main.go:141] libmachine: (addons-360741)     <apic/>
	I1019 12:06:45.971036  149430 main.go:141] libmachine: (addons-360741)     <pae/>
	I1019 12:06:45.971041  149430 main.go:141] libmachine: (addons-360741)   </features>
	I1019 12:06:45.971050  149430 main.go:141] libmachine: (addons-360741)   <cpu mode='host-passthrough'>
	I1019 12:06:45.971061  149430 main.go:141] libmachine: (addons-360741)   </cpu>
	I1019 12:06:45.971070  149430 main.go:141] libmachine: (addons-360741)   <os>
	I1019 12:06:45.971084  149430 main.go:141] libmachine: (addons-360741)     <type>hvm</type>
	I1019 12:06:45.971093  149430 main.go:141] libmachine: (addons-360741)     <boot dev='cdrom'/>
	I1019 12:06:45.971105  149430 main.go:141] libmachine: (addons-360741)     <boot dev='hd'/>
	I1019 12:06:45.971114  149430 main.go:141] libmachine: (addons-360741)     <bootmenu enable='no'/>
	I1019 12:06:45.971124  149430 main.go:141] libmachine: (addons-360741)   </os>
	I1019 12:06:45.971131  149430 main.go:141] libmachine: (addons-360741)   <devices>
	I1019 12:06:45.971138  149430 main.go:141] libmachine: (addons-360741)     <disk type='file' device='cdrom'>
	I1019 12:06:45.971150  149430 main.go:141] libmachine: (addons-360741)       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/boot2docker.iso'/>
	I1019 12:06:45.971157  149430 main.go:141] libmachine: (addons-360741)       <target dev='hdc' bus='scsi'/>
	I1019 12:06:45.971161  149430 main.go:141] libmachine: (addons-360741)       <readonly/>
	I1019 12:06:45.971170  149430 main.go:141] libmachine: (addons-360741)     </disk>
	I1019 12:06:45.971179  149430 main.go:141] libmachine: (addons-360741)     <disk type='file' device='disk'>
	I1019 12:06:45.971195  149430 main.go:141] libmachine: (addons-360741)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1019 12:06:45.971216  149430 main.go:141] libmachine: (addons-360741)       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/addons-360741.rawdisk'/>
	I1019 12:06:45.971226  149430 main.go:141] libmachine: (addons-360741)       <target dev='hda' bus='virtio'/>
	I1019 12:06:45.971234  149430 main.go:141] libmachine: (addons-360741)     </disk>
	I1019 12:06:45.971244  149430 main.go:141] libmachine: (addons-360741)     <interface type='network'>
	I1019 12:06:45.971253  149430 main.go:141] libmachine: (addons-360741)       <source network='mk-addons-360741'/>
	I1019 12:06:45.971263  149430 main.go:141] libmachine: (addons-360741)       <model type='virtio'/>
	I1019 12:06:45.971273  149430 main.go:141] libmachine: (addons-360741)     </interface>
	I1019 12:06:45.971303  149430 main.go:141] libmachine: (addons-360741)     <interface type='network'>
	I1019 12:06:45.971325  149430 main.go:141] libmachine: (addons-360741)       <source network='default'/>
	I1019 12:06:45.971335  149430 main.go:141] libmachine: (addons-360741)       <model type='virtio'/>
	I1019 12:06:45.971344  149430 main.go:141] libmachine: (addons-360741)     </interface>
	I1019 12:06:45.971353  149430 main.go:141] libmachine: (addons-360741)     <serial type='pty'>
	I1019 12:06:45.971361  149430 main.go:141] libmachine: (addons-360741)       <target port='0'/>
	I1019 12:06:45.971376  149430 main.go:141] libmachine: (addons-360741)     </serial>
	I1019 12:06:45.971388  149430 main.go:141] libmachine: (addons-360741)     <console type='pty'>
	I1019 12:06:45.971398  149430 main.go:141] libmachine: (addons-360741)       <target type='serial' port='0'/>
	I1019 12:06:45.971409  149430 main.go:141] libmachine: (addons-360741)     </console>
	I1019 12:06:45.971418  149430 main.go:141] libmachine: (addons-360741)     <rng model='virtio'>
	I1019 12:06:45.971427  149430 main.go:141] libmachine: (addons-360741)       <backend model='random'>/dev/random</backend>
	I1019 12:06:45.971437  149430 main.go:141] libmachine: (addons-360741)     </rng>
	I1019 12:06:45.971456  149430 main.go:141] libmachine: (addons-360741)   </devices>
	I1019 12:06:45.971465  149430 main.go:141] libmachine: (addons-360741) </domain>
	I1019 12:06:45.971497  149430 main.go:141] libmachine: (addons-360741) 
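	
	If the VM fails to come up, the definition above can be cross-checked against what libvirt actually created; a hedged inspection sketch using standard virsh commands (the lease/arp sources mirror the lookups the driver logs while waiting for an IP further down):
	
	    # Inspect the domain and the private network libvirt created for it.
	    virsh dumpxml addons-360741
	    virsh net-dumpxml mk-addons-360741
	    # Check for the DHCP lease / ARP entry the driver is waiting on.
	    virsh net-dhcp-leases mk-addons-360741
	    virsh domifaddr addons-360741 --source arp
	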
	I1019 12:06:45.977987  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:af:31:3c in network default
	I1019 12:06:45.978651  149430 main.go:141] libmachine: (addons-360741) starting domain...
	I1019 12:06:45.978672  149430 main.go:141] libmachine: (addons-360741) ensuring networks are active...
	I1019 12:06:45.978684  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:45.979397  149430 main.go:141] libmachine: (addons-360741) Ensuring network default is active
	I1019 12:06:45.979820  149430 main.go:141] libmachine: (addons-360741) Ensuring network mk-addons-360741 is active
	I1019 12:06:45.980436  149430 main.go:141] libmachine: (addons-360741) getting domain XML...
	I1019 12:06:45.981314  149430 main.go:141] libmachine: (addons-360741) DBG | starting domain XML:
	I1019 12:06:45.981336  149430 main.go:141] libmachine: (addons-360741) DBG | <domain type='kvm'>
	I1019 12:06:45.981346  149430 main.go:141] libmachine: (addons-360741) DBG |   <name>addons-360741</name>
	I1019 12:06:45.981354  149430 main.go:141] libmachine: (addons-360741) DBG |   <uuid>0563e0d0-9896-4b39-a296-0acfe6f230dc</uuid>
	I1019 12:06:45.981363  149430 main.go:141] libmachine: (addons-360741) DBG |   <memory unit='KiB'>4194304</memory>
	I1019 12:06:45.981370  149430 main.go:141] libmachine: (addons-360741) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1019 12:06:45.981376  149430 main.go:141] libmachine: (addons-360741) DBG |   <vcpu placement='static'>2</vcpu>
	I1019 12:06:45.981380  149430 main.go:141] libmachine: (addons-360741) DBG |   <os>
	I1019 12:06:45.981385  149430 main.go:141] libmachine: (addons-360741) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1019 12:06:45.981389  149430 main.go:141] libmachine: (addons-360741) DBG |     <boot dev='cdrom'/>
	I1019 12:06:45.981400  149430 main.go:141] libmachine: (addons-360741) DBG |     <boot dev='hd'/>
	I1019 12:06:45.981405  149430 main.go:141] libmachine: (addons-360741) DBG |     <bootmenu enable='no'/>
	I1019 12:06:45.981409  149430 main.go:141] libmachine: (addons-360741) DBG |   </os>
	I1019 12:06:45.981413  149430 main.go:141] libmachine: (addons-360741) DBG |   <features>
	I1019 12:06:45.981418  149430 main.go:141] libmachine: (addons-360741) DBG |     <acpi/>
	I1019 12:06:45.981422  149430 main.go:141] libmachine: (addons-360741) DBG |     <apic/>
	I1019 12:06:45.981432  149430 main.go:141] libmachine: (addons-360741) DBG |     <pae/>
	I1019 12:06:45.981439  149430 main.go:141] libmachine: (addons-360741) DBG |   </features>
	I1019 12:06:45.981447  149430 main.go:141] libmachine: (addons-360741) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1019 12:06:45.981454  149430 main.go:141] libmachine: (addons-360741) DBG |   <clock offset='utc'/>
	I1019 12:06:45.981479  149430 main.go:141] libmachine: (addons-360741) DBG |   <on_poweroff>destroy</on_poweroff>
	I1019 12:06:45.981503  149430 main.go:141] libmachine: (addons-360741) DBG |   <on_reboot>restart</on_reboot>
	I1019 12:06:45.981516  149430 main.go:141] libmachine: (addons-360741) DBG |   <on_crash>destroy</on_crash>
	I1019 12:06:45.981526  149430 main.go:141] libmachine: (addons-360741) DBG |   <devices>
	I1019 12:06:45.981536  149430 main.go:141] libmachine: (addons-360741) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1019 12:06:45.981543  149430 main.go:141] libmachine: (addons-360741) DBG |     <disk type='file' device='cdrom'>
	I1019 12:06:45.981549  149430 main.go:141] libmachine: (addons-360741) DBG |       <driver name='qemu' type='raw'/>
	I1019 12:06:45.981562  149430 main.go:141] libmachine: (addons-360741) DBG |       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/boot2docker.iso'/>
	I1019 12:06:45.981574  149430 main.go:141] libmachine: (addons-360741) DBG |       <target dev='hdc' bus='scsi'/>
	I1019 12:06:45.981585  149430 main.go:141] libmachine: (addons-360741) DBG |       <readonly/>
	I1019 12:06:45.981600  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1019 12:06:45.981622  149430 main.go:141] libmachine: (addons-360741) DBG |     </disk>
	I1019 12:06:45.981634  149430 main.go:141] libmachine: (addons-360741) DBG |     <disk type='file' device='disk'>
	I1019 12:06:45.981640  149430 main.go:141] libmachine: (addons-360741) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1019 12:06:45.981668  149430 main.go:141] libmachine: (addons-360741) DBG |       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/addons-360741.rawdisk'/>
	I1019 12:06:45.981697  149430 main.go:141] libmachine: (addons-360741) DBG |       <target dev='hda' bus='virtio'/>
	I1019 12:06:45.981714  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1019 12:06:45.981722  149430 main.go:141] libmachine: (addons-360741) DBG |     </disk>
	I1019 12:06:45.981728  149430 main.go:141] libmachine: (addons-360741) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1019 12:06:45.981734  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1019 12:06:45.981742  149430 main.go:141] libmachine: (addons-360741) DBG |     </controller>
	I1019 12:06:45.981751  149430 main.go:141] libmachine: (addons-360741) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1019 12:06:45.981776  149430 main.go:141] libmachine: (addons-360741) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1019 12:06:45.981789  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1019 12:06:45.981804  149430 main.go:141] libmachine: (addons-360741) DBG |     </controller>
	I1019 12:06:45.981814  149430 main.go:141] libmachine: (addons-360741) DBG |     <interface type='network'>
	I1019 12:06:45.981823  149430 main.go:141] libmachine: (addons-360741) DBG |       <mac address='52:54:00:04:80:77'/>
	I1019 12:06:45.981831  149430 main.go:141] libmachine: (addons-360741) DBG |       <source network='mk-addons-360741'/>
	I1019 12:06:45.981841  149430 main.go:141] libmachine: (addons-360741) DBG |       <model type='virtio'/>
	I1019 12:06:45.981850  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1019 12:06:45.981861  149430 main.go:141] libmachine: (addons-360741) DBG |     </interface>
	I1019 12:06:45.981871  149430 main.go:141] libmachine: (addons-360741) DBG |     <interface type='network'>
	I1019 12:06:45.981880  149430 main.go:141] libmachine: (addons-360741) DBG |       <mac address='52:54:00:af:31:3c'/>
	I1019 12:06:45.981891  149430 main.go:141] libmachine: (addons-360741) DBG |       <source network='default'/>
	I1019 12:06:45.981908  149430 main.go:141] libmachine: (addons-360741) DBG |       <model type='virtio'/>
	I1019 12:06:45.981923  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1019 12:06:45.981932  149430 main.go:141] libmachine: (addons-360741) DBG |     </interface>
	I1019 12:06:45.981936  149430 main.go:141] libmachine: (addons-360741) DBG |     <serial type='pty'>
	I1019 12:06:45.981945  149430 main.go:141] libmachine: (addons-360741) DBG |       <target type='isa-serial' port='0'>
	I1019 12:06:45.981949  149430 main.go:141] libmachine: (addons-360741) DBG |         <model name='isa-serial'/>
	I1019 12:06:45.981956  149430 main.go:141] libmachine: (addons-360741) DBG |       </target>
	I1019 12:06:45.981960  149430 main.go:141] libmachine: (addons-360741) DBG |     </serial>
	I1019 12:06:45.981979  149430 main.go:141] libmachine: (addons-360741) DBG |     <console type='pty'>
	I1019 12:06:45.981988  149430 main.go:141] libmachine: (addons-360741) DBG |       <target type='serial' port='0'/>
	I1019 12:06:45.981993  149430 main.go:141] libmachine: (addons-360741) DBG |     </console>
	I1019 12:06:45.982003  149430 main.go:141] libmachine: (addons-360741) DBG |     <input type='mouse' bus='ps2'/>
	I1019 12:06:45.982008  149430 main.go:141] libmachine: (addons-360741) DBG |     <input type='keyboard' bus='ps2'/>
	I1019 12:06:45.982020  149430 main.go:141] libmachine: (addons-360741) DBG |     <audio id='1' type='none'/>
	I1019 12:06:45.982026  149430 main.go:141] libmachine: (addons-360741) DBG |     <memballoon model='virtio'>
	I1019 12:06:45.982037  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1019 12:06:45.982041  149430 main.go:141] libmachine: (addons-360741) DBG |     </memballoon>
	I1019 12:06:45.982049  149430 main.go:141] libmachine: (addons-360741) DBG |     <rng model='virtio'>
	I1019 12:06:45.982060  149430 main.go:141] libmachine: (addons-360741) DBG |       <backend model='random'>/dev/random</backend>
	I1019 12:06:45.982071  149430 main.go:141] libmachine: (addons-360741) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1019 12:06:45.982082  149430 main.go:141] libmachine: (addons-360741) DBG |     </rng>
	I1019 12:06:45.982091  149430 main.go:141] libmachine: (addons-360741) DBG |   </devices>
	I1019 12:06:45.982099  149430 main.go:141] libmachine: (addons-360741) DBG | </domain>
	I1019 12:06:45.982111  149430 main.go:141] libmachine: (addons-360741) DBG | 
	I1019 12:06:47.253002  149430 main.go:141] libmachine: (addons-360741) waiting for domain to start...
	I1019 12:06:47.254479  149430 main.go:141] libmachine: (addons-360741) domain is now running
	I1019 12:06:47.254508  149430 main.go:141] libmachine: (addons-360741) waiting for IP...
	I1019 12:06:47.255342  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:47.255835  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:47.255851  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:47.256116  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:47.256202  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:47.256134  149459 retry.go:31] will retry after 234.930811ms: waiting for domain to come up
	I1019 12:06:47.492616  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:47.493140  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:47.493165  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:47.493525  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:47.493578  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:47.493516  149459 retry.go:31] will retry after 251.795744ms: waiting for domain to come up
	I1019 12:06:47.747330  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:47.747802  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:47.747825  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:47.748056  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:47.748120  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:47.748049  149459 retry.go:31] will retry after 371.127737ms: waiting for domain to come up
	I1019 12:06:48.120783  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:48.121313  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:48.121347  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:48.121622  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:48.121673  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:48.121609  149459 retry.go:31] will retry after 573.825769ms: waiting for domain to come up
	I1019 12:06:48.697806  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:48.698364  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:48.698384  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:48.698713  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:48.698745  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:48.698685  149459 retry.go:31] will retry after 596.079449ms: waiting for domain to come up
	I1019 12:06:49.296888  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:49.297465  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:49.297501  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:49.297750  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:49.297811  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:49.297739  149459 retry.go:31] will retry after 628.136129ms: waiting for domain to come up
	I1019 12:06:49.927824  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:49.928338  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:49.928362  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:49.928635  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:49.928722  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:49.928651  149459 retry.go:31] will retry after 1.060292297s: waiting for domain to come up
	I1019 12:06:50.990561  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:50.991049  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:50.991081  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:50.991270  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:50.991352  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:50.991265  149459 retry.go:31] will retry after 1.090021618s: waiting for domain to come up
	I1019 12:06:52.082571  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:52.083079  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:52.083109  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:52.083384  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:52.083413  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:52.083341  149459 retry.go:31] will retry after 1.372557894s: waiting for domain to come up
	I1019 12:06:53.457912  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:53.458458  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:53.458504  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:53.458712  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:53.458768  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:53.458700  149459 retry.go:31] will retry after 1.753352597s: waiting for domain to come up
	I1019 12:06:55.213267  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:55.213785  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:55.213812  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:55.214082  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:55.214108  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:55.214059  149459 retry.go:31] will retry after 2.332760069s: waiting for domain to come up
	I1019 12:06:57.549700  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:06:57.550074  149430 main.go:141] libmachine: (addons-360741) DBG | no network interface addresses found for domain addons-360741 (source=lease)
	I1019 12:06:57.550096  149430 main.go:141] libmachine: (addons-360741) DBG | trying to list again with source=arp
	I1019 12:06:57.550355  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find current IP address of domain addons-360741 in network mk-addons-360741 (interfaces detected: [])
	I1019 12:06:57.550411  149430 main.go:141] libmachine: (addons-360741) DBG | I1019 12:06:57.550353  149459 retry.go:31] will retry after 3.358714016s: waiting for domain to come up
	I1019 12:07:00.910327  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:00.910821  149430 main.go:141] libmachine: (addons-360741) found domain IP: 192.168.39.35
	I1019 12:07:00.910845  149430 main.go:141] libmachine: (addons-360741) reserving static IP address...
	I1019 12:07:00.910857  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has current primary IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:00.911215  149430 main.go:141] libmachine: (addons-360741) DBG | unable to find host DHCP lease matching {name: "addons-360741", mac: "52:54:00:04:80:77", ip: "192.168.39.35"} in network mk-addons-360741
	I1019 12:07:01.091171  149430 main.go:141] libmachine: (addons-360741) DBG | Getting to WaitForSSH function...
	I1019 12:07:01.091210  149430 main.go:141] libmachine: (addons-360741) reserved static IP address 192.168.39.35 for domain addons-360741
	I1019 12:07:01.091253  149430 main.go:141] libmachine: (addons-360741) waiting for SSH...
	I1019 12:07:01.093683  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.094091  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.094110  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.094389  149430 main.go:141] libmachine: (addons-360741) DBG | Using SSH client type: external
	I1019 12:07:01.094419  149430 main.go:141] libmachine: (addons-360741) DBG | Using SSH private key: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa (-rw-------)
	I1019 12:07:01.094466  149430 main.go:141] libmachine: (addons-360741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 12:07:01.094479  149430 main.go:141] libmachine: (addons-360741) DBG | About to run SSH command:
	I1019 12:07:01.094504  149430 main.go:141] libmachine: (addons-360741) DBG | exit 0
	I1019 12:07:01.225351  149430 main.go:141] libmachine: (addons-360741) DBG | SSH cmd err, output: <nil>: 
	I1019 12:07:01.225636  149430 main.go:141] libmachine: (addons-360741) domain creation complete
	I1019 12:07:01.225921  149430 main.go:141] libmachine: (addons-360741) Calling .GetConfigRaw
	I1019 12:07:01.226566  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:01.226770  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:01.226942  149430 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1019 12:07:01.226958  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:01.228314  149430 main.go:141] libmachine: Detecting operating system of created instance...
	I1019 12:07:01.228342  149430 main.go:141] libmachine: Waiting for SSH to be available...
	I1019 12:07:01.228351  149430 main.go:141] libmachine: Getting to WaitForSSH function...
	I1019 12:07:01.228362  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:01.230657  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.231052  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.231092  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.231258  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:01.231442  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.231605  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.231742  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:01.231899  149430 main.go:141] libmachine: Using SSH client type: native
	I1019 12:07:01.232140  149430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1019 12:07:01.232149  149430 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1019 12:07:01.333319  149430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:07:01.333353  149430 main.go:141] libmachine: Detecting the provisioner...
	I1019 12:07:01.333366  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:01.336610  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.336981  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.337010  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.337224  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:01.337442  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.337602  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.337745  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:01.337895  149430 main.go:141] libmachine: Using SSH client type: native
	I1019 12:07:01.338116  149430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1019 12:07:01.338128  149430 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1019 12:07:01.445846  149430 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1019 12:07:01.445914  149430 main.go:141] libmachine: found compatible host: buildroot
	I1019 12:07:01.445921  149430 main.go:141] libmachine: Provisioning with buildroot...
	I1019 12:07:01.445929  149430 main.go:141] libmachine: (addons-360741) Calling .GetMachineName
	I1019 12:07:01.446221  149430 buildroot.go:166] provisioning hostname "addons-360741"
	I1019 12:07:01.446298  149430 main.go:141] libmachine: (addons-360741) Calling .GetMachineName
	I1019 12:07:01.446516  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:01.449182  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.449555  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.449578  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.449758  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:01.449933  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.450120  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.450248  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:01.450427  149430 main.go:141] libmachine: Using SSH client type: native
	I1019 12:07:01.450632  149430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1019 12:07:01.450644  149430 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-360741 && echo "addons-360741" | sudo tee /etc/hostname
	I1019 12:07:01.565086  149430 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-360741
	
	I1019 12:07:01.565114  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:01.568231  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.568610  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.568634  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.568842  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:01.569037  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.569202  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.569343  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:01.569496  149430 main.go:141] libmachine: Using SSH client type: native
	I1019 12:07:01.569768  149430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1019 12:07:01.569792  149430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-360741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-360741/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-360741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:07:01.679416  149430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:07:01.679451  149430 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21772-144655/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-144655/.minikube}
	I1019 12:07:01.679488  149430 buildroot.go:174] setting up certificates
	I1019 12:07:01.679502  149430 provision.go:84] configureAuth start
	I1019 12:07:01.679531  149430 main.go:141] libmachine: (addons-360741) Calling .GetMachineName
	I1019 12:07:01.679875  149430 main.go:141] libmachine: (addons-360741) Calling .GetIP
	I1019 12:07:01.682877  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.683291  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.683322  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.683442  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:01.685718  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.686038  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.686064  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.686203  149430 provision.go:143] copyHostCerts
	I1019 12:07:01.686272  149430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/ca.pem (1078 bytes)
	I1019 12:07:01.686461  149430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/cert.pem (1123 bytes)
	I1019 12:07:01.686544  149430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/key.pem (1675 bytes)
	I1019 12:07:01.686594  149430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem org=jenkins.addons-360741 san=[127.0.0.1 192.168.39.35 addons-360741 localhost minikube]
	I1019 12:07:01.855112  149430 provision.go:177] copyRemoteCerts
	I1019 12:07:01.855191  149430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:07:01.855233  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:01.857986  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.858331  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:01.858362  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:01.858548  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:01.858727  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:01.858867  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:01.859012  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:01.938414  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1019 12:07:01.965498  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 12:07:01.991590  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:07:02.018196  149430 provision.go:87] duration metric: took 338.651685ms to configureAuth
	I1019 12:07:02.018227  149430 buildroot.go:189] setting minikube options for container-runtime
	I1019 12:07:02.018426  149430 config.go:182] Loaded profile config "addons-360741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:07:02.018547  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:02.021687  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.022083  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.022109  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.022307  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:02.022515  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:02.022659  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:02.022784  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:02.022957  149430 main.go:141] libmachine: Using SSH client type: native
	I1019 12:07:02.023213  149430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1019 12:07:02.023240  149430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:07:02.241021  149430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:07:02.241052  149430 main.go:141] libmachine: Checking connection to Docker...
	I1019 12:07:02.241064  149430 main.go:141] libmachine: (addons-360741) Calling .GetURL
	I1019 12:07:02.242426  149430 main.go:141] libmachine: (addons-360741) DBG | using libvirt version 8000000
	I1019 12:07:02.244932  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.245904  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.245937  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.246189  149430 main.go:141] libmachine: Docker is up and running!
	I1019 12:07:02.246207  149430 main.go:141] libmachine: Reticulating splines...
	I1019 12:07:02.246216  149430 client.go:171] duration metric: took 17.284299371s to LocalClient.Create
	I1019 12:07:02.246244  149430 start.go:167] duration metric: took 17.284370422s to libmachine.API.Create "addons-360741"
	I1019 12:07:02.246257  149430 start.go:293] postStartSetup for "addons-360741" (driver="kvm2")
	I1019 12:07:02.246271  149430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:07:02.246329  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:02.246598  149430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:07:02.246644  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:02.249111  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.249452  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.249475  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.249632  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:02.249825  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:02.249979  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:02.250096  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:02.333882  149430 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:07:02.339436  149430 info.go:137] Remote host: Buildroot 2025.02
	I1019 12:07:02.339471  149430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-144655/.minikube/addons for local assets ...
	I1019 12:07:02.339552  149430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-144655/.minikube/files for local assets ...
	I1019 12:07:02.339577  149430 start.go:296] duration metric: took 93.313232ms for postStartSetup
	I1019 12:07:02.339612  149430 main.go:141] libmachine: (addons-360741) Calling .GetConfigRaw
	I1019 12:07:02.340211  149430 main.go:141] libmachine: (addons-360741) Calling .GetIP
	I1019 12:07:02.342972  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.343348  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.343377  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.343600  149430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/config.json ...
	I1019 12:07:02.343775  149430 start.go:128] duration metric: took 17.398694248s to createHost
	I1019 12:07:02.343798  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:02.346126  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.346476  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.346503  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.346639  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:02.346794  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:02.346906  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:02.347069  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:02.347228  149430 main.go:141] libmachine: Using SSH client type: native
	I1019 12:07:02.347465  149430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1019 12:07:02.347477  149430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1019 12:07:02.451237  149430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760875622.421247728
	
	I1019 12:07:02.451263  149430 fix.go:216] guest clock: 1760875622.421247728
	I1019 12:07:02.451270  149430 fix.go:229] Guest: 2025-10-19 12:07:02.421247728 +0000 UTC Remote: 2025-10-19 12:07:02.343786788 +0000 UTC m=+17.507428305 (delta=77.46094ms)
	I1019 12:07:02.451355  149430 fix.go:200] guest clock delta is within tolerance: 77.46094ms
	I1019 12:07:02.451363  149430 start.go:83] releasing machines lock for "addons-360741", held for 17.506357314s
	I1019 12:07:02.451394  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:02.451658  149430 main.go:141] libmachine: (addons-360741) Calling .GetIP
	I1019 12:07:02.454302  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.454696  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.454726  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.454917  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:02.455532  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:02.455753  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:02.455904  149430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:07:02.455959  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:02.455996  149430 ssh_runner.go:195] Run: cat /version.json
	I1019 12:07:02.456023  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:02.459113  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.459224  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.459552  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.459574  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.459601  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:02.459616  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:02.459693  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:02.459877  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:02.459943  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:02.460042  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:02.460125  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:02.460199  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:02.460259  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:02.460414  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:02.559200  149430 ssh_runner.go:195] Run: systemctl --version
	I1019 12:07:02.564930  149430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:07:02.718107  149430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:07:02.725060  149430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:07:02.725137  149430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:07:02.743793  149430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:07:02.743817  149430 start.go:495] detecting cgroup driver to use...
	I1019 12:07:02.743900  149430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:07:02.762043  149430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:07:02.778517  149430 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:07:02.778605  149430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:07:02.795802  149430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:07:02.811576  149430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:07:02.944001  149430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:07:03.143253  149430 docker.go:234] disabling docker service ...
	I1019 12:07:03.143338  149430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:07:03.162796  149430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:07:03.177190  149430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:07:03.326522  149430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:07:03.468348  149430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:07:03.484712  149430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:07:03.505754  149430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:07:03.505823  149430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:07:03.517032  149430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 12:07:03.517133  149430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:07:03.528321  149430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:07:03.540907  149430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:07:03.552320  149430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:07:03.564466  149430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:07:03.575687  149430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:07:03.595035  149430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:07:03.606666  149430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:07:03.616336  149430 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 12:07:03.616388  149430 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 12:07:03.636084  149430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:07:03.650565  149430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:07:03.785302  149430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:07:03.884561  149430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:07:03.884660  149430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:07:03.889859  149430 start.go:563] Will wait 60s for crictl version
	I1019 12:07:03.889937  149430 ssh_runner.go:195] Run: which crictl
	I1019 12:07:03.893804  149430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1019 12:07:03.929339  149430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1019 12:07:03.929498  149430 ssh_runner.go:195] Run: crio --version
	I1019 12:07:03.959461  149430 ssh_runner.go:195] Run: crio --version
	I1019 12:07:03.987977  149430 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1019 12:07:03.988989  149430 main.go:141] libmachine: (addons-360741) Calling .GetIP
	I1019 12:07:03.992163  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:03.992580  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:03.992609  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:03.992892  149430 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1019 12:07:03.997007  149430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:07:04.011476  149430 kubeadm.go:883] updating cluster {Name:addons-360741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-360741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:07:04.011616  149430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:07:04.011689  149430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:07:04.044049  149430 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1019 12:07:04.044117  149430 ssh_runner.go:195] Run: which lz4
	I1019 12:07:04.048119  149430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1019 12:07:04.052501  149430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1019 12:07:04.052534  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1019 12:07:05.328372  149430 crio.go:462] duration metric: took 1.280292715s to copy over tarball
	I1019 12:07:05.328471  149430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1019 12:07:06.884511  149430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.556003163s)
	I1019 12:07:06.884553  149430 crio.go:469] duration metric: took 1.556151621s to extract the tarball
	I1019 12:07:06.884564  149430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1019 12:07:06.924772  149430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:07:06.968472  149430 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:07:06.968499  149430 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:07:06.968507  149430 kubeadm.go:934] updating node { 192.168.39.35 8443 v1.34.1 crio true true} ...
	I1019 12:07:06.968659  149430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-360741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-360741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:07:06.968732  149430 ssh_runner.go:195] Run: crio config
	I1019 12:07:07.013245  149430 cni.go:84] Creating CNI manager for ""
	I1019 12:07:07.013268  149430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:07:07.013303  149430 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:07:07.013326  149430 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-360741 NodeName:addons-360741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:07:07.013473  149430 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-360741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.35"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:07:07.013541  149430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:07:07.024732  149430 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:07:07.024814  149430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:07:07.035428  149430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1019 12:07:07.053568  149430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:07:07.071579  149430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1019 12:07:07.089704  149430 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I1019 12:07:07.093309  149430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.35	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:07:07.106345  149430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:07:07.241979  149430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:07:07.277087  149430 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741 for IP: 192.168.39.35
	I1019 12:07:07.277114  149430 certs.go:195] generating shared ca certs ...
	I1019 12:07:07.277140  149430 certs.go:227] acquiring lock for ca certs: {Name:mk3746b9a64228b33b458f684a19c91de0767499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:07.277796  149430 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key
	I1019 12:07:07.515168  149430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt ...
	I1019 12:07:07.515197  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt: {Name:mk2c8890c025cedef311592ea1aa23da11835aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:07.515364  149430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key ...
	I1019 12:07:07.515377  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key: {Name:mkf41810299c13280b25300d915de2e38c7595d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:07.515915  149430 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key
	I1019 12:07:08.025771  149430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.crt ...
	I1019 12:07:08.025811  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.crt: {Name:mkbda4b57b9e1b3a8c1c0e8dddc13dd5328cb7f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.026015  149430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key ...
	I1019 12:07:08.026034  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key: {Name:mkd9478d8aa4e737ce4677a8516a7e79ff1043f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.026138  149430 certs.go:257] generating profile certs ...
	I1019 12:07:08.026230  149430 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.key
	I1019 12:07:08.026262  149430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt with IP's: []
	I1019 12:07:08.553944  149430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt ...
	I1019 12:07:08.553980  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: {Name:mk01358de14b1ba127df2b422e7839b89509dda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.554839  149430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.key ...
	I1019 12:07:08.554867  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.key: {Name:mk69915ea39d1231a5a8527bb6d9b6aca68636c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.554990  149430 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.key.35cef6fd
	I1019 12:07:08.555016  149430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.crt.35cef6fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.35]
	I1019 12:07:08.607803  149430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.crt.35cef6fd ...
	I1019 12:07:08.607828  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.crt.35cef6fd: {Name:mkdb3d734ea61ebbcdba4197ebb5342b22e0ee7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.608465  149430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.key.35cef6fd ...
	I1019 12:07:08.608484  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.key.35cef6fd: {Name:mk07c5423daab29af8d8eb91e18296c4aa49bdad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.608556  149430 certs.go:382] copying /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.crt.35cef6fd -> /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.crt
	I1019 12:07:08.608647  149430 certs.go:386] copying /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.key.35cef6fd -> /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.key
	I1019 12:07:08.608700  149430 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.key
	I1019 12:07:08.608719  149430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.crt with IP's: []
	I1019 12:07:08.705348  149430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.crt ...
	I1019 12:07:08.705366  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.crt: {Name:mk11d29a049ad48ef746c6282c80e7a9862cf7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.705480  149430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.key ...
	I1019 12:07:08.705492  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.key: {Name:mk29c9d64ce4122f5ccf5a3ac868b7952eccec99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:08.705644  149430 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:07:08.705678  149430 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem (1078 bytes)
	I1019 12:07:08.705702  149430 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:07:08.705728  149430 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem (1675 bytes)
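	
	The apiserver profile cert generated above is signed by the minikubeCA with the service IP, loopback, and node IP as SANs ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.35]). A minimal sketch of producing a CA-signed certificate with IP SANs using the standard library; key sizes, lifetimes, and output filename are illustrative, not minikube's actual parameters:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Self-signed CA standing in for minikubeCA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}
	
		// Leaf cert with the same IP SANs the log reports for the apiserver cert.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.35"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		// Write the leaf cert as PEM; the filename is illustrative only.
		out, err := os.Create("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}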
	I1019 12:07:08.706374  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:07:08.734949  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:07:08.761332  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:07:08.787243  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:07:08.814266  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 12:07:08.846556  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:07:08.877012  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:07:08.906814  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 12:07:08.934949  149430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:07:08.962133  149430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:07:08.980874  149430 ssh_runner.go:195] Run: openssl version
	I1019 12:07:08.987042  149430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:07:08.999334  149430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:07:09.004083  149430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:07 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:07:09.004140  149430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:07:09.011047  149430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:07:09.023673  149430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:07:09.028349  149430 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:07:09.028411  149430 kubeadm.go:400] StartCluster: {Name:addons-360741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-360741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:07:09.028505  149430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:07:09.028561  149430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:07:09.065059  149430 cri.go:89] found id: ""
	I1019 12:07:09.065140  149430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:07:09.076500  149430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:07:09.087128  149430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:07:09.097948  149430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:07:09.097972  149430 kubeadm.go:157] found existing configuration files:
	
	I1019 12:07:09.098024  149430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:07:09.108649  149430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:07:09.108712  149430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:07:09.120109  149430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:07:09.130246  149430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:07:09.130323  149430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:07:09.141074  149430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:07:09.151049  149430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:07:09.151110  149430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:07:09.162056  149430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:07:09.171852  149430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:07:09.171945  149430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:07:09.182923  149430 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1019 12:07:09.317700  149430 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 12:07:21.000437  149430 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:07:21.000545  149430 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:07:21.000621  149430 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:07:21.000707  149430 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:07:21.000806  149430 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:07:21.000900  149430 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:07:21.072850  149430 out.go:252]   - Generating certificates and keys ...
	I1019 12:07:21.072975  149430 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:07:21.073073  149430 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 12:07:21.073197  149430 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:07:21.073297  149430 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:07:21.073384  149430 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:07:21.073466  149430 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:07:21.073548  149430 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:07:21.073719  149430 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-360741 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I1019 12:07:21.073795  149430 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:07:21.073961  149430 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-360741 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I1019 12:07:21.074047  149430 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:07:21.074133  149430 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:07:21.074225  149430 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:07:21.074402  149430 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:07:21.074490  149430 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:07:21.074562  149430 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:07:21.074634  149430 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:07:21.074718  149430 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:07:21.074784  149430 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:07:21.074853  149430 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:07:21.074907  149430 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:07:21.114991  149430 out.go:252]   - Booting up control plane ...
	I1019 12:07:21.115099  149430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:07:21.115175  149430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:07:21.115252  149430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:07:21.115421  149430 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:07:21.115573  149430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:07:21.115723  149430 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:07:21.115866  149430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:07:21.115932  149430 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:07:21.116098  149430 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:07:21.116241  149430 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:07:21.116353  149430 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.88467ms
	I1019 12:07:21.116490  149430 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:07:21.116606  149430 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.35:8443/livez
	I1019 12:07:21.116755  149430 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:07:21.116881  149430 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 12:07:21.116955  149430 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.202385538s
	I1019 12:07:21.117024  149430 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.619929008s
	I1019 12:07:21.117115  149430 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501578448s
	I1019 12:07:21.117270  149430 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:07:21.117440  149430 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:07:21.117532  149430 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:07:21.117728  149430 kubeadm.go:318] [mark-control-plane] Marking the node addons-360741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:07:21.117817  149430 kubeadm.go:318] [bootstrap-token] Using token: pilg4c.vkvn25sau26mrqb7
	I1019 12:07:21.176732  149430 out.go:252]   - Configuring RBAC rules ...
	I1019 12:07:21.176848  149430 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:07:21.176915  149430 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:07:21.177060  149430 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:07:21.177228  149430 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:07:21.177397  149430 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:07:21.177512  149430 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:07:21.177687  149430 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:07:21.177754  149430 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:07:21.177840  149430 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:07:21.177857  149430 kubeadm.go:318] 
	I1019 12:07:21.177944  149430 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:07:21.177955  149430 kubeadm.go:318] 
	I1019 12:07:21.178055  149430 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:07:21.178064  149430 kubeadm.go:318] 
	I1019 12:07:21.178103  149430 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:07:21.178185  149430 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:07:21.178252  149430 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:07:21.178265  149430 kubeadm.go:318] 
	I1019 12:07:21.178361  149430 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:07:21.178372  149430 kubeadm.go:318] 
	I1019 12:07:21.178426  149430 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:07:21.178440  149430 kubeadm.go:318] 
	I1019 12:07:21.178502  149430 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:07:21.178610  149430 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:07:21.178713  149430 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:07:21.178721  149430 kubeadm.go:318] 
	I1019 12:07:21.178827  149430 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:07:21.178934  149430 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:07:21.178949  149430 kubeadm.go:318] 
	I1019 12:07:21.179063  149430 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pilg4c.vkvn25sau26mrqb7 \
	I1019 12:07:21.179160  149430 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a0da8c38f34b8d11a9eb86a37f0f4b9c1c1ee2dfd6848a1a5987ddbafe36a3d4 \
	I1019 12:07:21.179186  149430 kubeadm.go:318] 	--control-plane 
	I1019 12:07:21.179192  149430 kubeadm.go:318] 
	I1019 12:07:21.179261  149430 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:07:21.179267  149430 kubeadm.go:318] 
	I1019 12:07:21.179394  149430 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pilg4c.vkvn25sau26mrqb7 \
	I1019 12:07:21.179565  149430 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a0da8c38f34b8d11a9eb86a37f0f4b9c1c1ee2dfd6848a1a5987ddbafe36a3d4 
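	
	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info. A minimal sketch of recomputing it from a CA certificate PEM; the local file path is hypothetical (on the node the CA sits at /var/lib/minikube/certs/ca.crt per the scp lines earlier in this log):
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)
	
	func main() {
		data, err := os.ReadFile("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm hashes the raw SubjectPublicKeyInfo of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}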
	I1019 12:07:21.179581  149430 cni.go:84] Creating CNI manager for ""
	I1019 12:07:21.179590  149430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:07:21.261337  149430 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1019 12:07:21.310504  149430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1019 12:07:21.322724  149430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1019 12:07:21.343184  149430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:07:21.343265  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:21.343321  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-360741 minikube.k8s.io/updated_at=2025_10_19T12_07_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=addons-360741 minikube.k8s.io/primary=true
	I1019 12:07:21.395615  149430 ops.go:34] apiserver oom_adj: -16
	I1019 12:07:21.493240  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:21.993506  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:22.494318  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:22.993997  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:23.493478  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:23.994154  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:24.493589  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:24.994159  149430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:07:25.071922  149430 kubeadm.go:1113] duration metric: took 3.728722071s to wait for elevateKubeSystemPrivileges
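	
	The elevateKubeSystemPrivileges step timed above corresponds to the "kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default" call logged at 12:07:21. An equivalent client-go sketch, assuming a kubeconfig path and reusing the names from that command:
	
	package main
	
	import (
		"context"
		"log"
	
		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path is illustrative; the log shows minikube using
		// /var/lib/minikube/kubeconfig on the node.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		crb := &rbacv1.ClusterRoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "ClusterRole",
				Name:     "cluster-admin",
			},
			Subjects: []rbacv1.Subject{{
				Kind:      "ServiceAccount",
				Name:      "default",
				Namespace: "kube-system",
			}},
		}
		if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Println("created clusterrolebinding minikube-rbac")
	}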
	I1019 12:07:25.071972  149430 kubeadm.go:402] duration metric: took 16.043565512s to StartCluster
	I1019 12:07:25.071996  149430 settings.go:142] acquiring lock: {Name:mke60a3280e21298abca03691052cdadefc62fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:25.072124  149430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:07:25.072557  149430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/kubeconfig: {Name:mka451e8e94291f8682e25e26bb194afdfe90331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:07:25.072764  149430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:07:25.072774  149430 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:07:25.072863  149430 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 12:07:25.072991  149430 addons.go:69] Setting yakd=true in profile "addons-360741"
	I1019 12:07:25.073009  149430 addons.go:238] Setting addon yakd=true in "addons-360741"
	I1019 12:07:25.073008  149430 addons.go:69] Setting cloud-spanner=true in profile "addons-360741"
	I1019 12:07:25.073025  149430 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-360741"
	I1019 12:07:25.073037  149430 addons.go:238] Setting addon cloud-spanner=true in "addons-360741"
	I1019 12:07:25.073039  149430 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-360741"
	I1019 12:07:25.073049  149430 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-360741"
	I1019 12:07:25.073044  149430 addons.go:69] Setting metrics-server=true in profile "addons-360741"
	I1019 12:07:25.073074  149430 addons.go:69] Setting registry-creds=true in profile "addons-360741"
	I1019 12:07:25.073075  149430 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-360741"
	I1019 12:07:25.073012  149430 config.go:182] Loaded profile config "addons-360741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:07:25.073078  149430 addons.go:69] Setting ingress-dns=true in profile "addons-360741"
	I1019 12:07:25.073093  149430 addons.go:69] Setting inspektor-gadget=true in profile "addons-360741"
	I1019 12:07:25.073101  149430 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-360741"
	I1019 12:07:25.073106  149430 addons.go:238] Setting addon inspektor-gadget=true in "addons-360741"
	I1019 12:07:25.073110  149430 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-360741"
	I1019 12:07:25.073118  149430 addons.go:69] Setting volcano=true in profile "addons-360741"
	I1019 12:07:25.073128  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073144  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073059  149430 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-360741"
	I1019 12:07:25.073243  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073079  149430 addons.go:69] Setting gcp-auth=true in profile "addons-360741"
	I1019 12:07:25.073354  149430 mustload.go:65] Loading cluster: addons-360741
	I1019 12:07:25.073078  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073541  149430 config.go:182] Loaded profile config "addons-360741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:07:25.073127  149430 addons.go:69] Setting volumesnapshots=true in profile "addons-360741"
	I1019 12:07:25.073648  149430 addons.go:238] Setting addon volumesnapshots=true in "addons-360741"
	I1019 12:07:25.073686  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073719  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.073734  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.073759  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.073772  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.073802  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.073118  149430 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-360741"
	I1019 12:07:25.073735  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.073869  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.073894  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.073901  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.073107  149430 addons.go:238] Setting addon ingress-dns=true in "addons-360741"
	I1019 12:07:25.073971  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.074069  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.074100  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.073772  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.074270  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.074319  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.074369  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.074401  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.073064  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073042  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073066  149430 addons.go:69] Setting registry=true in profile "addons-360741"
	I1019 12:07:25.074885  149430 addons.go:238] Setting addon registry=true in "addons-360741"
	I1019 12:07:25.074915  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.075019  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.075046  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.073083  149430 addons.go:238] Setting addon metrics-server=true in "addons-360741"
	I1019 12:07:25.075306  149430 out.go:179] * Verifying Kubernetes components...
	I1019 12:07:25.075324  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073105  149430 addons.go:69] Setting default-storageclass=true in profile "addons-360741"
	I1019 12:07:25.075531  149430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-360741"
	I1019 12:07:25.073117  149430 addons.go:69] Setting ingress=true in profile "addons-360741"
	I1019 12:07:25.075676  149430 addons.go:238] Setting addon ingress=true in "addons-360741"
	I1019 12:07:25.075721  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073130  149430 addons.go:238] Setting addon volcano=true in "addons-360741"
	I1019 12:07:25.075994  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073091  149430 addons.go:69] Setting storage-provisioner=true in profile "addons-360741"
	I1019 12:07:25.076057  149430 addons.go:238] Setting addon storage-provisioner=true in "addons-360741"
	I1019 12:07:25.076094  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.073085  149430 addons.go:238] Setting addon registry-creds=true in "addons-360741"
	I1019 12:07:25.076257  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.076606  149430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:07:25.083872  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.083929  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.084510  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.084543  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.086704  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.086743  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.086794  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.086829  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.087250  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.087294  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.087639  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.087669  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.088150  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.088180  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.090767  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.090810  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.099888  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I1019 12:07:25.105020  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.105411  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I1019 12:07:25.105752  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.105775  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.106332  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.107011  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.107059  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.107470  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.108093  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.108124  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.108650  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.109477  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.109513  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.112399  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1019 12:07:25.112700  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I1019 12:07:25.113223  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.113728  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.113743  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.114151  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.114323  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
	I1019 12:07:25.114736  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I1019 12:07:25.114927  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.115013  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.115513  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.116329  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.116354  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.116763  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.116778  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I1019 12:07:25.117194  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.117254  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.117269  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.117713  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.117784  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.117823  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.117860  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.118395  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.118426  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.121588  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.121719  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.121738  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.122163  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.123276  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33363
	I1019 12:07:25.123857  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.124340  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.124358  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.124736  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.124752  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.124819  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.125361  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.127796  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.127840  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.128030  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I1019 12:07:25.129732  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.129991  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.130334  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.130409  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.130841  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.130877  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.131262  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.132105  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.132850  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I1019 12:07:25.135414  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.136058  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.136124  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.136908  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.137349  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.137407  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.138973  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
	I1019 12:07:25.139465  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.140104  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.140143  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.144173  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1019 12:07:25.144351  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I1019 12:07:25.144474  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I1019 12:07:25.144580  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.145615  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.145632  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.145987  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.146042  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.146531  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.146552  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.146971  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.147006  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.147611  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.147785  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.149364  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.150637  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.150879  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.150924  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.151361  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.151398  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.151538  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I1019 12:07:25.152068  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.152115  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.152726  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I1019 12:07:25.152931  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.152945  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.153457  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.153885  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.153910  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.153611  149430 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-360741"
	I1019 12:07:25.153993  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.154436  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.154470  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.155497  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 12:07:25.155719  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.155770  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.155940  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I1019 12:07:25.156516  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.156808  149430 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 12:07:25.156829  149430 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 12:07:25.156864  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.157759  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.157913  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.157841  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.158383  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.159237  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.159850  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I1019 12:07:25.160882  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.160926  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.163043  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
	I1019 12:07:25.163057  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.163156  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I1019 12:07:25.163731  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.164011  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.164206  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.164225  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.165020  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.165114  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.165187  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I1019 12:07:25.165329  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.165436  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.165676  149430 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 12:07:25.165763  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 12:07:25.165956  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.166035  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.166051  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.166317  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.166363  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.166709  149430 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 12:07:25.166732  149430 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 12:07:25.166755  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.166835  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.166854  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.167175  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.167188  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.167588  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.167918  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 12:07:25.168079  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.168180  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.168213  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.169039  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.169566  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.169580  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.170322  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 12:07:25.170542  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.170643  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.171886  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.172504  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.172539  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.173026  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 12:07:25.174122  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 12:07:25.174632  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I1019 12:07:25.175342  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.176232  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 12:07:25.176690  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.176708  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.178153  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 12:07:25.178406  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.178433  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.178668  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.178726  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.179293  149430 addons.go:238] Setting addon default-storageclass=true in "addons-360741"
	I1019 12:07:25.179338  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:25.179722  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.179774  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.180070  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.180144  149430 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 12:07:25.180496  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.181176  149430 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 12:07:25.181198  149430 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 12:07:25.181220  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.182041  149430 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 12:07:25.182348  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.182384  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.182602  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.182755  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.182932  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.182928  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.182978  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.183017  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.183344  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.183723  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.183902  149430 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 12:07:25.183980  149430 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 12:07:25.184001  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.184086  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.184238  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.184445  149430 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 12:07:25.184847  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I1019 12:07:25.185735  149430 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1019 12:07:25.185754  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 12:07:25.185771  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.191649  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.193007  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.193073  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I1019 12:07:25.194628  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.194626  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.194659  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.194656  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.194673  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.194685  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.195184  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.195202  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.195802  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.196051  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.196248  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I1019 12:07:25.196355  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.196463  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.196609  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.196609  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.196781  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.196909  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.197043  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.197066  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.197132  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.197234  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.197254  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.197457  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I1019 12:07:25.197706  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.197743  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.197828  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.197985  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.198070  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.198147  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.198193  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.199602  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.199628  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.199718  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.199790  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.199802  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.200068  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I1019 12:07:25.200313  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.200329  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.200769  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.200770  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.201118  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.201124  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.201216  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.201330  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:25.201344  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:25.201412  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I1019 12:07:25.201915  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.201975  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.202497  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:25.202533  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:25.202540  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:25.202547  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:25.202553  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:25.202797  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:25.202817  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:25.202834  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:25.202842  149430 main.go:141] libmachine: () Calling .GetVersion
	W1019 12:07:25.202922  149430 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 12:07:25.203428  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.204136  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.204155  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.204744  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.204952  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.204977  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.205290  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.205356  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.206071  149430 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 12:07:25.206376  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.207116  149430 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:07:25.207132  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 12:07:25.207151  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.209433  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.209633  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I1019 12:07:25.209931  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I1019 12:07:25.210195  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.210986  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.211042  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.211347  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I1019 12:07:25.211394  149430 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 12:07:25.211839  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I1019 12:07:25.211753  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.211768  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.212113  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.212291  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.212483  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.212497  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.212666  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.212680  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.212955  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.213073  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.213230  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.213367  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.213592  149430 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:07:25.213633  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 12:07:25.213650  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.213831  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.214508  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.215264  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.215344  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.215649  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.215856  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.215872  149430 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 12:07:25.216011  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.216316  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.216854  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.217147  149430 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:07:25.217164  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 12:07:25.217182  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.217580  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.217611  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.217647  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.218247  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.218542  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.218768  149430 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 12:07:25.218838  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.219132  149430 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 12:07:25.219186  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.219840  149430 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 12:07:25.220653  149430 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 12:07:25.220772  149430 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 12:07:25.220820  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.221175  149430 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:07:25.221491  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I1019 12:07:25.221661  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.221761  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.221791  149430 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:07:25.222024  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 12:07:25.222044  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.222208  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.222747  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.222770  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.223018  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I1019 12:07:25.223180  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.223349  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.223372  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.223607  149430 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:07:25.223669  149430 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:07:25.223697  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.223749  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.224011  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.224096  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.224314  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.224421  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.224505  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.224779  149430 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:07:25.224858  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 12:07:25.225045  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.224779  149430 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:07:25.225181  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:07:25.225198  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.224961  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.225310  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.225555  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:25.225688  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:25.225691  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.226304  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.226582  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.227405  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.227619  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.228083  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.228337  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.228637  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.229318  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.229433  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.229796  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.229970  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.230122  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.230324  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.230548  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.232122  149430 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 12:07:25.232333  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.232794  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.232996  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.232937  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.233020  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I1019 12:07:25.233214  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.233377  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.233559  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.233613  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.233698  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.233713  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.233731  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.233740  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.233941  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.233951  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.234122  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.233751  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.234240  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.234265  149430 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 12:07:25.234296  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.234336  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.234375  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.234351  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.234536  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.234545  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.234704  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.234740  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.234894  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.235382  149430 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 12:07:25.235403  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 12:07:25.235417  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.236795  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.238065  149430 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 12:07:25.238887  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.239322  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.239346  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.239537  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.239688  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.239851  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.239956  149430 out.go:179]   - Using image docker.io/busybox:stable
	I1019 12:07:25.240019  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.241078  149430 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:07:25.241095  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 12:07:25.241109  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.244088  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.244515  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.244544  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.244724  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.244887  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.245057  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.245211  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:25.245301  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35471
	I1019 12:07:25.245712  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:25.246292  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:25.246330  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:25.246680  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:25.246882  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:25.248422  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:25.248637  149430 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:07:25.248653  149430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:07:25.248668  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:25.253009  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:25.253014  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.253070  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:25.253094  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:25.253246  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:25.253475  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:25.253622  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	W1019 12:07:25.453256  149430 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36880->192.168.39.35:22: read: connection reset by peer
	I1019 12:07:25.453351  149430 retry.go:31] will retry after 339.037093ms: ssh: handshake failed: read tcp 192.168.39.1:36880->192.168.39.35:22: read: connection reset by peer
	W1019 12:07:25.453457  149430 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36896->192.168.39.35:22: read: connection reset by peer
	I1019 12:07:25.453471  149430 retry.go:31] will retry after 327.94484ms: ssh: handshake failed: read tcp 192.168.39.1:36896->192.168.39.35:22: read: connection reset by peer
	I1019 12:07:25.617142  149430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:07:25.617152  149430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:07:25.671951  149430 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 12:07:25.671975  149430 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 12:07:25.761459  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:07:25.822338  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 12:07:25.898157  149430 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 12:07:25.898182  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 12:07:25.899948  149430 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 12:07:25.899969  149430 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 12:07:25.923825  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:07:25.929256  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:07:25.946480  149430 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:25.946502  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 12:07:25.951447  149430 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 12:07:25.951473  149430 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 12:07:25.962931  149430 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 12:07:25.962950  149430 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 12:07:26.009604  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:07:26.027408  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:07:26.044597  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:07:26.055728  149430 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 12:07:26.055751  149430 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 12:07:26.127477  149430 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:07:26.127506  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 12:07:26.146323  149430 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 12:07:26.146357  149430 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 12:07:26.205050  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:26.259643  149430 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 12:07:26.259676  149430 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 12:07:26.279899  149430 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 12:07:26.279920  149430 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 12:07:26.335934  149430 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 12:07:26.335960  149430 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 12:07:26.437484  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:07:26.437659  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:07:26.493992  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:07:26.513506  149430 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 12:07:26.513538  149430 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 12:07:26.524293  149430 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:07:26.524314  149430 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 12:07:26.609770  149430 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 12:07:26.609796  149430 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 12:07:26.623770  149430 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 12:07:26.623793  149430 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 12:07:26.736073  149430 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:07:26.736098  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 12:07:26.847018  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:07:26.896212  149430 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:07:26.896235  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 12:07:26.976376  149430 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 12:07:26.976404  149430 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 12:07:27.000628  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:07:27.304166  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:07:27.315602  149430 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 12:07:27.315628  149430 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 12:07:27.899382  149430 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 12:07:27.899409  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 12:07:28.678191  149430 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 12:07:28.678215  149430 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 12:07:28.687153  149430 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.069977575s)
	I1019 12:07:28.687181  149430 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1019 12:07:28.687243  149430 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.070021724s)
	I1019 12:07:28.688363  149430 node_ready.go:35] waiting up to 6m0s for node "addons-360741" to be "Ready" ...
	I1019 12:07:28.743925  149430 node_ready.go:49] node "addons-360741" is "Ready"
	I1019 12:07:28.743967  149430 node_ready.go:38] duration metric: took 55.105658ms for node "addons-360741" to be "Ready" ...
	I1019 12:07:28.743986  149430 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:07:28.744047  149430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:07:29.234273  149430 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-360741" context rescaled to 1 replicas
	I1019 12:07:29.263675  149430 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 12:07:29.263699  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 12:07:29.715302  149430 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 12:07:29.715339  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 12:07:30.092320  149430 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:07:30.092347  149430 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 12:07:30.386131  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:07:32.647323  149430 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 12:07:32.647369  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:32.651558  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:32.652106  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:32.652141  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:32.652400  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:32.652587  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:32.652783  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:32.652940  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:32.851699  149430 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 12:07:32.907377  149430 addons.go:238] Setting addon gcp-auth=true in "addons-360741"
	I1019 12:07:32.907463  149430 host.go:66] Checking if "addons-360741" exists ...
	I1019 12:07:32.907922  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:32.907985  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:32.922330  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34757
	I1019 12:07:32.922917  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:32.923506  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:32.923537  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:32.923898  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:32.924382  149430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:07:32.924420  149430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:07:32.938357  149430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I1019 12:07:32.938916  149430 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:07:32.939558  149430 main.go:141] libmachine: Using API Version  1
	I1019 12:07:32.939585  149430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:07:32.940036  149430 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:07:32.940247  149430 main.go:141] libmachine: (addons-360741) Calling .GetState
	I1019 12:07:32.942091  149430 main.go:141] libmachine: (addons-360741) Calling .DriverName
	I1019 12:07:32.942315  149430 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 12:07:32.942343  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHHostname
	I1019 12:07:32.945478  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:32.945953  149430 main.go:141] libmachine: (addons-360741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:80:77", ip: ""} in network mk-addons-360741: {Iface:virbr1 ExpiryTime:2025-10-19 13:07:00 +0000 UTC Type:0 Mac:52:54:00:04:80:77 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-360741 Clientid:01:52:54:00:04:80:77}
	I1019 12:07:32.945984  149430 main.go:141] libmachine: (addons-360741) DBG | domain addons-360741 has defined IP address 192.168.39.35 and MAC address 52:54:00:04:80:77 in network mk-addons-360741
	I1019 12:07:32.946191  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHPort
	I1019 12:07:32.946381  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHKeyPath
	I1019 12:07:32.946540  149430 main.go:141] libmachine: (addons-360741) Calling .GetSSHUsername
	I1019 12:07:32.946711  149430 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/addons-360741/id_rsa Username:docker}
	I1019 12:07:33.606891  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.845388083s)
	I1019 12:07:33.606971  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.606987  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.606987  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.784614792s)
	I1019 12:07:33.607031  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607047  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607119  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.683269268s)
	I1019 12:07:33.607148  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607159  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607181  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.67789339s)
	I1019 12:07:33.607209  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607222  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607264  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.597598025s)
	I1019 12:07:33.607313  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607324  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607315  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.579884744s)
	I1019 12:07:33.607375  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.562749878s)
	I1019 12:07:33.607400  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607410  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607380  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607431  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607478  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.402398538s)
	I1019 12:07:33.607492  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.607499  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.607505  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.607519  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.607518  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.607533  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607535  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.607535  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.607540  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607544  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.607482  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.607553  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607560  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607544  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607604  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607625  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.16994669s)
	I1019 12:07:33.607585  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.170076223s)
	I1019 12:07:33.607649  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607658  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607649  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607687  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607691  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.11366598s)
	I1019 12:07:33.607707  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607717  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.607859  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.760810949s)
	I1019 12:07:33.607879  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.607888  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.608020  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.60735986s)
	W1019 12:07:33.608052  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 12:07:33.608075  149430 retry.go:31] will retry after 197.708741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
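The failure above is an ordering race rather than a bad manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml can only be mapped once the volumesnapshotclasses.snapshot.storage.k8s.io CRD created in the same apply has been established, which is why the retry (and the later --force re-apply) succeeds. A minimal sketch of the kind of object being rejected; the driver name is an assumption about the csi-hostpath addon, and the real manifest contents are not shown in this log:

    # Sketch only: applying a VolumeSnapshotClass works once the CRD is established.
    kubectl apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass
    driver: hostpath.csi.k8s.io      # assumed driver name for the csi-hostpath addon
    deletionPolicy: Delete
    EOF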
	I1019 12:07:33.608136  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.303926227s)
	I1019 12:07:33.608158  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.608169  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.608253  149430 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.864189714s)
	I1019 12:07:33.608271  149430 api_server.go:72] duration metric: took 8.535477483s to wait for apiserver process to appear ...
	I1019 12:07:33.608293  149430 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:07:33.608339  149430 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I1019 12:07:33.608731  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.608743  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.608754  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.608762  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.608826  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.608848  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.608853  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.609178  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.609212  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.611003  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.611031  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.611039  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.610206  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610237  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.611102  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.611308  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.611309  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.611364  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.610309  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610334  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.611456  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.611466  149430 addons.go:479] Verifying addon ingress=true in "addons-360741"
	I1019 12:07:33.611470  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.611480  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.610349  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610354  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.612231  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.612242  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.612254  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.610375  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.612359  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.612372  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.612380  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.610390  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610402  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610408  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.612445  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.612461  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.612468  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.612753  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.612787  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610425  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.612823  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.612826  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.612834  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.612844  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.610431  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.612865  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.612875  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.612883  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.612835  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.612898  149430 addons.go:479] Verifying addon metrics-server=true in "addons-360741"
	I1019 12:07:33.610440  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610468  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.613234  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.613245  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.613252  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.614984  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.615003  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.610469  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.610487  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.615276  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.615536  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.615544  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.615555  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.615558  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.615577  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.615582  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.615546  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.610271  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.615636  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.615515  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.615655  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.615661  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.615667  149430 addons.go:479] Verifying addon registry=true in "addons-360741"
	I1019 12:07:33.610450  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	W1019 12:07:33.607515  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:33.615966  149430 retry.go:31] will retry after 292.726653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
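Every retry of this pair fails identically because kubectl's client-side validation requires each YAML document in ig-crd.yaml to declare apiVersion and kind; the objects from ig-deployment.yaml apply cleanly, so the offending document is in the CRD file itself. A sketch of one way to locate it, using only the file path reported in the log (the ssh/grep invocation is an assumption about how one would inspect the file, not part of the test):

    # List document separators and apiVersion/kind lines; a "---" block with neither
    # line following it is the document that trips validation.
    minikube -p addons-360741 ssh -- \
      "sudo grep -n -e '^---' -e '^apiVersion:' -e '^kind:' /etc/kubernetes/addons/ig-crd.yaml"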
	I1019 12:07:33.615393  149430 out.go:179] * Verifying ingress addon...
	I1019 12:07:33.616686  149430 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-360741 service yakd-dashboard -n yakd-dashboard
	
	I1019 12:07:33.616084  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.616109  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.617206  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.616123  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.616127  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.617277  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.618410  149430 out.go:179] * Verifying registry addon...
	I1019 12:07:33.619381  149430 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 12:07:33.620025  149430 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 12:07:33.657336  149430 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I1019 12:07:33.660995  149430 api_server.go:141] control plane version: v1.34.1
	I1019 12:07:33.661030  149430 api_server.go:131] duration metric: took 52.721337ms to wait for apiserver health ...
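The healthz probe logged at api_server.go:253 just asks the API server's /healthz endpoint and expects the literal body "ok". A rough manual equivalent, reusing the kubeconfig and kubectl binary paths that appear throughout this log:

    # Rough equivalent of the probe above; expected output is "ok".
    minikube -p addons-360741 ssh -- \
      "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get --raw /healthz"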
	I1019 12:07:33.661060  149430 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:07:33.672904  149430 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:07:33.672924  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:33.675173  149430 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 12:07:33.675198  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
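The long runs of "waiting for pod ... current state: Pending" further below are kapi.go polling these two label selectors until a matching pod reports Ready. A rough kubectl equivalent of the same wait, with selectors and namespaces taken from the lines above (the timeout value is an arbitrary assumption):

    # Sketch: the same readiness wait expressed with kubectl.
    kubectl --context addons-360741 -n ingress-nginx wait pod \
      --selector=app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
    kubectl --context addons-360741 -n kube-system wait pod \
      --selector=kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m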
	I1019 12:07:33.701697  149430 system_pods.go:59] 16 kube-system pods found
	I1019 12:07:33.701754  149430 system_pods.go:61] "amd-gpu-device-plugin-s28qs" [18e690aa-169f-4ed1-afd6-03256ec9b7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 12:07:33.701761  149430 system_pods.go:61] "coredns-66bc5c9577-kd585" [5963a053-34ba-4523-9c9c-bb9ed7e3e9b0] Running
	I1019 12:07:33.701766  149430 system_pods.go:61] "etcd-addons-360741" [e53f6189-3efb-40f7-aac6-8499d0117194] Running
	I1019 12:07:33.701769  149430 system_pods.go:61] "kube-apiserver-addons-360741" [66287f89-94dc-437d-a2f2-df650d20551e] Running
	I1019 12:07:33.701773  149430 system_pods.go:61] "kube-controller-manager-addons-360741" [d1116575-2f7b-4371-b072-29210d98d93f] Running
	I1019 12:07:33.701777  149430 system_pods.go:61] "kube-ingress-dns-minikube" [7f6b2cc2-bccb-4e31-9491-81ca4108d468] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:07:33.701784  149430 system_pods.go:61] "kube-proxy-42tdl" [d8dd16ff-e00f-44ee-a3d2-843647158a21] Running
	I1019 12:07:33.701788  149430 system_pods.go:61] "kube-scheduler-addons-360741" [d6f3b564-01ca-4ccb-943e-e855f1098d3f] Running
	I1019 12:07:33.701797  149430 system_pods.go:61] "metrics-server-85b7d694d7-djqc2" [572b8d9c-4d84-47b5-8f49-7478a2d3fbbf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:07:33.701805  149430 system_pods.go:61] "nvidia-device-plugin-daemonset-8xnsb" [2247795d-e86a-4366-af44-71e3643b8a20] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:07:33.701811  149430 system_pods.go:61] "registry-6b586f9694-w9nbt" [5a256bb4-be22-4253-b273-f54382dd90ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:07:33.701820  149430 system_pods.go:61] "registry-creds-764b6fb674-krct7" [978d5957-dfb1-4a44-9131-ed6ee927d4c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:07:33.701825  149430 system_pods.go:61] "registry-proxy-v2zn6" [5d518ee0-0b2c-4d81-8f53-add3c51066b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:07:33.701832  149430 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fdl9v" [28f35ba5-ae53-42a2-9402-2c735edd71f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:07:33.701838  149430 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tnrzw" [0f14dee8-741b-493b-b338-413b5743f12f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:07:33.701845  149430 system_pods.go:61] "storage-provisioner" [e2a0cd2b-d30e-4eb4-a6a0-5acae325b054] Running
	I1019 12:07:33.701852  149430 system_pods.go:74] duration metric: took 40.78615ms to wait for pod list to return data ...
	I1019 12:07:33.701861  149430 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:07:33.708792  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.708816  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.709233  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.709260  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	W1019 12:07:33.709387  149430 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
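The "object has been modified" warning above is an optimistic-concurrency conflict: the addon read the local-path StorageClass, another writer updated it, and the update with the stale resourceVersion was rejected. The intended change is idempotent, so re-applying it succeeds; a minimal sketch of the same change done by hand (a patch avoids carrying a stale resourceVersion):

    # Sketch: mark local-path as the default StorageClass, the change the addon was attempting.
    kubectl --context addons-360741 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'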
	I1019 12:07:33.755634  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:33.755664  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:33.756000  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:33.756040  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:33.756054  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:33.763147  149430 default_sa.go:45] found service account: "default"
	I1019 12:07:33.763197  149430 default_sa.go:55] duration metric: took 61.327321ms for default service account to be created ...
	I1019 12:07:33.763212  149430 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:07:33.789613  149430 system_pods.go:86] 16 kube-system pods found
	I1019 12:07:33.789655  149430 system_pods.go:89] "amd-gpu-device-plugin-s28qs" [18e690aa-169f-4ed1-afd6-03256ec9b7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 12:07:33.789664  149430 system_pods.go:89] "coredns-66bc5c9577-kd585" [5963a053-34ba-4523-9c9c-bb9ed7e3e9b0] Running
	I1019 12:07:33.789670  149430 system_pods.go:89] "etcd-addons-360741" [e53f6189-3efb-40f7-aac6-8499d0117194] Running
	I1019 12:07:33.789677  149430 system_pods.go:89] "kube-apiserver-addons-360741" [66287f89-94dc-437d-a2f2-df650d20551e] Running
	I1019 12:07:33.789683  149430 system_pods.go:89] "kube-controller-manager-addons-360741" [d1116575-2f7b-4371-b072-29210d98d93f] Running
	I1019 12:07:33.789691  149430 system_pods.go:89] "kube-ingress-dns-minikube" [7f6b2cc2-bccb-4e31-9491-81ca4108d468] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:07:33.789697  149430 system_pods.go:89] "kube-proxy-42tdl" [d8dd16ff-e00f-44ee-a3d2-843647158a21] Running
	I1019 12:07:33.789706  149430 system_pods.go:89] "kube-scheduler-addons-360741" [d6f3b564-01ca-4ccb-943e-e855f1098d3f] Running
	I1019 12:07:33.789715  149430 system_pods.go:89] "metrics-server-85b7d694d7-djqc2" [572b8d9c-4d84-47b5-8f49-7478a2d3fbbf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:07:33.789728  149430 system_pods.go:89] "nvidia-device-plugin-daemonset-8xnsb" [2247795d-e86a-4366-af44-71e3643b8a20] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:07:33.789744  149430 system_pods.go:89] "registry-6b586f9694-w9nbt" [5a256bb4-be22-4253-b273-f54382dd90ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:07:33.789758  149430 system_pods.go:89] "registry-creds-764b6fb674-krct7" [978d5957-dfb1-4a44-9131-ed6ee927d4c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:07:33.789771  149430 system_pods.go:89] "registry-proxy-v2zn6" [5d518ee0-0b2c-4d81-8f53-add3c51066b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:07:33.789781  149430 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fdl9v" [28f35ba5-ae53-42a2-9402-2c735edd71f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:07:33.789793  149430 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnrzw" [0f14dee8-741b-493b-b338-413b5743f12f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:07:33.789800  149430 system_pods.go:89] "storage-provisioner" [e2a0cd2b-d30e-4eb4-a6a0-5acae325b054] Running
	I1019 12:07:33.789815  149430 system_pods.go:126] duration metric: took 26.593878ms to wait for k8s-apps to be running ...
	I1019 12:07:33.789829  149430 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:07:33.789893  149430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:07:33.806737  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:07:33.909881  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:34.133141  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:34.133553  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:34.403049  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.016863037s)
	I1019 12:07:34.403123  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:34.403133  149430 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.4607929s)
	I1019 12:07:34.403185  149430 system_svc.go:56] duration metric: took 613.353433ms WaitForService to wait for kubelet
	I1019 12:07:34.403206  149430 kubeadm.go:586] duration metric: took 9.330410415s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:07:34.403140  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:34.403233  149430 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:07:34.403550  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:34.403558  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:34.403567  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:34.403581  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:34.403590  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:34.403835  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:34.403853  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:34.403863  149430 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-360741"
	I1019 12:07:34.403871  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:34.404815  149430 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:07:34.405584  149430 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 12:07:34.406978  149430 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 12:07:34.407634  149430 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 12:07:34.408000  149430 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 12:07:34.408020  149430 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 12:07:34.418763  149430 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:07:34.418783  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:34.433010  149430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 12:07:34.433034  149430 node_conditions.go:123] node cpu capacity is 2
	I1019 12:07:34.433046  149430 node_conditions.go:105] duration metric: took 29.807623ms to run NodePressure ...
	I1019 12:07:34.433061  149430 start.go:241] waiting for startup goroutines ...
	I1019 12:07:34.533124  149430 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 12:07:34.533153  149430 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 12:07:34.626199  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:34.627073  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:34.645309  149430 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:07:34.645330  149430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 12:07:34.764006  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:07:34.914542  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:35.125361  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:35.126591  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:35.413565  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:35.625217  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:35.625457  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:35.914461  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:36.026912  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.220129143s)
	I1019 12:07:36.026976  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:36.026992  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:36.027357  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:36.027396  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:36.027410  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:36.027419  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:36.027438  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:36.027725  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:07:36.027781  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:36.027792  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:36.128220  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:36.129967  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:36.441777  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:36.631419  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:36.631856  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:36.842378  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.078325196s)
	I1019 12:07:36.842442  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:36.842461  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:36.842467  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.932541393s)
	W1019 12:07:36.842517  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:36.842555  149430 retry.go:31] will retry after 358.267631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:36.842803  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:36.842825  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:36.842833  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:07:36.842841  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:07:36.843061  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:07:36.843076  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:07:36.844090  149430 addons.go:479] Verifying addon gcp-auth=true in "addons-360741"
	I1019 12:07:36.845637  149430 out.go:179] * Verifying gcp-auth addon...
	I1019 12:07:36.847661  149430 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 12:07:36.853482  149430 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 12:07:36.853500  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:36.912358  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:37.128712  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:37.129560  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:37.201708  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:37.352581  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:37.413016  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:37.623923  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:37.627039  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:37.852346  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:37.911458  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:38.126454  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:38.130725  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:38.352553  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:38.376248  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.174498808s)
	W1019 12:07:38.376308  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:38.376334  149430 retry.go:31] will retry after 422.287019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:38.413472  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:38.625678  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:38.627427  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:38.799677  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:38.850703  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:38.910742  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:39.125612  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:39.127178  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:39.353608  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:39.413510  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:39.625479  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:39.626440  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:07:39.753647  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:39.753693  149430 retry.go:31] will retry after 707.282932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:39.855398  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:39.916803  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:40.124741  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:40.124767  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:40.351417  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:40.411469  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:40.461376  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:40.623857  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:40.626008  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:40.851266  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:40.911737  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:41.124481  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:41.126575  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:41.353177  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:41.412815  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:41.625618  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:41.627085  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:41.670448  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.209032864s)
	W1019 12:07:41.670484  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:41.670505  149430 retry.go:31] will retry after 1.01442703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:41.851656  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:41.914746  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:42.124553  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:42.125104  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:42.354379  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:42.412707  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:42.625530  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:42.626625  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:42.685749  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:42.853932  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:42.914046  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:43.127198  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:43.129147  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:43.398736  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:43.412346  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:43.626428  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:43.626685  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:43.784172  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.098351481s)
	W1019 12:07:43.784226  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:43.784255  149430 retry.go:31] will retry after 2.04727547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
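The block above repeats throughout this log: every kubectl apply of ig-crd.yaml fails validation ("apiVersion not set, kind not set"), and minikube re-runs the apply after a growing delay (2.0s, 3.5s, 6.1s, ... in the attempts that follow). A minimal sketch of that retry-with-backoff pattern is shown below; this is an illustrative helper only, not minikube's actual retry.go, and the function name, attempt count, and jitter factor are assumptions.

// retrybackoff_sketch.go: generic retry with roughly doubling delay and jitter,
// in the spirit of the "will retry after ..." lines in this log.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs apply until it succeeds or attempts are exhausted.
// The wait roughly doubles each time; up to 50% jitter is added so parallel
// retries do not synchronize.
func retryWithBackoff(attempts int, initial time.Duration, apply func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait)/2 + 1))
		fmt.Printf("apply failed, will retry after %v: %v\n", wait+jitter, err)
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
}

The jitter keeps several addon installers that fail at the same moment from hammering the apiserver in lockstep, which matches the irregular spacing of the retry intervals recorded above.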
	I1019 12:07:43.853542  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:43.914047  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:44.197770  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:44.199991  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:44.352771  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:44.411795  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:44.624494  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:44.627020  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:44.853609  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:44.911112  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:45.123228  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:45.124558  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:45.353034  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:45.412660  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:45.832361  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:45.844690  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:45.845384  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:45.852205  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:45.914957  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:46.124490  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:46.125015  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:46.351413  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:46.411363  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:46.623337  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:46.626052  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:07:46.702734  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:46.702769  149430 retry.go:31] will retry after 3.451873446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:46.851720  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:46.912429  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:47.123074  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:47.124341  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:47.353903  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:47.413450  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:47.895432  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:47.895922  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:47.896107  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:47.911554  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:48.134402  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:48.134645  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:48.514032  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:48.516459  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:48.623863  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:48.624625  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:48.850716  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:48.912985  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:49.127220  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:49.129603  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:49.352101  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:49.414376  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:49.622654  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:49.624155  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:49.851014  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:50.018836  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:50.124897  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:50.125205  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:50.155638  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:50.354460  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:50.412361  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:50.625212  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:50.625603  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:50.851457  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:07:50.853873  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:50.853913  149430 retry.go:31] will retry after 6.054717205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:50.910971  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:51.123930  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:51.124969  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:51.350805  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:51.410651  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:51.623129  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:51.623203  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:51.851351  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:51.911348  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:52.124316  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:52.124548  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:52.353106  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:52.411248  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:52.624158  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:52.624685  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:52.853958  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:52.912340  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:53.124525  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:53.126938  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:53.351867  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:53.411996  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:53.625062  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:53.625087  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:53.851328  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:53.912623  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:54.126416  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:54.126872  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:54.350416  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:54.412909  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:54.628786  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:54.630678  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:54.852965  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:54.915759  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:55.127971  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:55.128866  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:55.351424  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:55.412545  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:55.622482  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:55.623973  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:55.852269  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:55.911188  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:56.124541  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:56.124634  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:56.350835  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:56.412215  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:56.624416  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:56.624734  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:56.851866  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:56.909010  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:56.912132  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:57.125038  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:57.128526  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:57.351865  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:57.413132  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:57.625498  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:57.627632  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:07:57.692653  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:57.692703  149430 retry.go:31] will retry after 5.817079816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:57.851673  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:57.914000  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:58.123219  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:58.123248  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:58.351468  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:58.411525  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:58.622646  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:58.622971  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:58.851414  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:58.953290  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:59.124311  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:59.124481  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:59.351251  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:59.411120  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:59.622973  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:59.623660  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:59.853679  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:59.917580  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:00.124817  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:00.124965  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:00.351090  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:00.412082  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:00.625955  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:00.627233  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:00.851790  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:00.911927  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:01.123313  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:01.123963  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:01.350770  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:01.411082  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:01.623740  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:01.623789  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:01.850866  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:01.911356  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:02.122961  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:02.123865  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:02.351218  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:02.411645  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:02.623126  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:02.625207  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:02.851674  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:02.911102  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:03.123417  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:03.123524  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:03.352597  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:03.413418  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:03.510622  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:08:03.625044  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:03.629674  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:03.851339  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:03.912962  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:04.127154  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:04.127277  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:04.351818  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:04.411845  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:08:04.478615  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:04.478658  149430 retry.go:31] will retry after 9.729891153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:04.624008  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:04.625569  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:04.851104  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:04.912479  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:05.123319  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:05.123357  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:05.351349  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:05.412074  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:05.624006  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:05.624755  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:05.850603  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:05.911843  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:06.123592  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:06.123904  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:06.351227  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:06.411328  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:06.623228  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:06.624480  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:06.851984  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:06.914566  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:07.132814  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:07.132910  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:07.354674  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:07.415554  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:07.624792  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:07.625474  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:07.851946  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:07.914736  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:08.129077  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:08.129836  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:08.352797  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:08.414925  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:08.622934  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:08.623152  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:08.851750  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:08.914157  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:09.326254  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:09.326358  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:09.351965  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:09.411465  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:09.624060  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:09.624776  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:09.853159  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:09.914035  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:10.124243  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:10.124520  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:10.353946  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:10.411656  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:10.625098  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:10.627503  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:10.852377  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:10.916190  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:11.124797  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:11.124952  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:11.351894  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:11.412406  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:11.627773  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:11.628750  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:11.851614  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:11.912240  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:12.128658  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:12.129326  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:12.353092  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:12.414018  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:12.628888  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:12.629173  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:12.851771  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:12.913314  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:13.126827  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:13.127365  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:13.353314  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:13.412071  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:13.623002  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:13.624124  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:13.851393  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:13.914188  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:14.209522  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:08:14.571432  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:14.576144  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:14.576323  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:14.577790  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:14.624299  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:14.624748  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:14.853269  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:14.911861  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:08:15.074403  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:15.074436  149430 retry.go:31] will retry after 12.641341891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:15.123352  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:15.124342  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:15.351473  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:15.413070  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:15.624027  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:15.624347  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:15.851392  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:15.912656  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:16.122873  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:16.123699  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:16.350319  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:16.411022  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:16.631294  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:16.631492  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:16.852344  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:16.911736  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:17.123782  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:17.123786  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:17.353850  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:17.411171  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:17.624783  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:17.625038  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:17.851025  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:17.911235  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:18.124755  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:18.126749  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:18.350765  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:18.411736  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:18.628735  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:18.629158  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:18.853171  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:18.954685  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:19.123886  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:08:19.124645  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:19.350746  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:19.414563  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:19.626384  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:19.627213  149430 kapi.go:107] duration metric: took 46.007183042s to wait for kubernetes.io/minikube-addons=registry ...
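The line above closes one of the wait loops: the registry pods became ready after roughly 46s, while the surrounding "waiting for pod ... current state: Pending" lines continue for the other addons. A sketch of how such a label-selector wait can be polled with client-go is below; this is not minikube's kapi.go, the function name and polling interval are assumptions, and only the standard client-go List call is relied on.

// podwait_sketch.go: poll pods matching a label selector until one is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning lists pods in ns matching selector every interval until one
// reports phase Running, or the context is cancelled.
func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}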
	I1019 12:08:19.852026  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:19.912539  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:20.124370  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:20.517059  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:20.520696  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:20.623178  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:20.852616  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:20.912765  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:21.127457  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:21.351875  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:21.411561  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:21.622918  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:21.850826  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:21.912395  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:22.123658  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:22.351617  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:22.453071  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:22.623481  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:22.851950  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:22.911526  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:23.123537  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:23.352306  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:23.414033  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:23.625886  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:23.852023  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:23.910940  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:24.373883  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:24.377441  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:24.411331  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:24.624385  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:24.852611  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:24.911181  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:25.122475  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:25.361919  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:25.410752  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:25.625778  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:25.852306  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:25.911189  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:26.123541  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:26.351761  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:26.410811  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:26.623222  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:26.851877  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:26.911062  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:27.123005  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:27.350798  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:27.411342  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:27.623553  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:27.716691  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:08:27.851571  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:27.913699  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:28.124789  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:28.352036  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:28.412510  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:28.623844  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:08:28.706382  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:28.706414  149430 retry.go:31] will retry after 14.706889205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:28.853768  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:28.914725  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:29.123543  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:29.351677  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:29.411994  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:29.624148  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:29.853219  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:29.911978  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:30.122273  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:30.353212  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:30.411418  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:30.623801  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:30.852036  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:30.917215  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:31.124024  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:31.352073  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:31.411553  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:31.623493  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:31.854493  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:31.953100  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:32.123491  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:32.357333  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:32.412657  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:32.624466  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:32.855344  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:32.915608  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:33.123188  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:33.351844  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:33.413346  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:33.628693  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:33.851793  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:33.911034  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:34.123455  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:34.351421  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:34.411572  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:34.622596  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:34.852530  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:34.910977  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:35.122168  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:35.351578  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:35.412034  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:35.623630  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:35.852124  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:35.913076  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:36.122774  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:36.351628  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:36.454364  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:36.623487  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:36.852714  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:36.953620  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:37.124238  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:37.351845  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:37.412724  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:37.624917  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:37.850685  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:37.913546  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:38.123339  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:38.352648  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:38.413803  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:38.623197  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:38.855619  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:38.955250  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:39.125751  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:39.351332  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:39.414268  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:39.623401  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:39.851692  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:39.913827  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:40.123821  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:40.351958  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:40.412017  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:40.623175  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:40.855172  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:40.923187  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:41.130419  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:41.352555  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:41.417095  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:41.622839  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:41.854023  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:41.953080  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:42.123794  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:42.351003  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:42.412787  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:42.625216  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:42.865478  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:42.917041  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:43.126785  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:43.351372  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:43.412376  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:43.414384  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:08:43.626910  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:43.853380  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:43.912354  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:44.123966  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:44.353559  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:44.428979  149430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.014558369s)
	W1019 12:08:44.429029  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:44.429053  149430 retry.go:31] will retry after 33.410283227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:08:44.455390  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:44.628469  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:44.851322  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:44.913331  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:45.124077  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:45.351237  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:45.411048  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:45.623452  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:45.853533  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:45.912249  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:46.123623  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:46.353885  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:46.413146  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:46.623409  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:46.853198  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:46.911768  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:47.123322  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:47.351346  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:47.411559  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:47.625495  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:47.855683  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:47.912667  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:48.123784  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:48.351496  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:48.411753  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:48.625120  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:48.851016  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:48.912835  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:49.123272  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:49.351596  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:49.415168  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:49.624050  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:49.851108  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:49.912199  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:50.125398  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:50.352133  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:50.413052  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:50.626831  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:50.852201  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:50.913609  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:51.124686  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:51.351087  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:51.411109  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:08:51.623423  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:51.851914  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:51.915473  149430 kapi.go:107] duration metric: took 1m17.507834095s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 12:08:52.123129  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:52.351839  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:52.623333  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:52.851866  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:53.124344  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:53.351207  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:53.625151  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:53.851089  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:54.122992  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:54.351174  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:54.623302  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:54.851121  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:55.123469  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:55.351476  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:55.623156  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:55.850966  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:56.123474  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:56.351217  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:56.622528  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:56.851569  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:57.123465  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:57.351228  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:57.622584  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:57.852190  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:58.123992  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:58.350812  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:58.624945  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:58.851124  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:59.124105  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:59.351238  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:08:59.623564  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:08:59.851370  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:00.123840  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:00.351601  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:00.622978  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:00.850896  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:01.123803  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:01.350577  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:01.623135  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:01.850864  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:02.123275  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:02.351426  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:02.623762  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:02.852154  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:03.124172  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:03.351912  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:03.628013  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:03.850688  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:04.122924  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:04.351577  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:04.622968  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:04.851113  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:05.123312  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:05.351537  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:05.622851  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:05.850842  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:06.123060  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:06.350928  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:06.623615  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:06.851733  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:07.123696  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:07.351698  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:07.623122  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:07.851695  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:08.123836  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:08.350736  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:08.624409  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:08.851388  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:09.123427  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:09.351199  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:09.623366  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:09.851370  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:10.123250  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:10.351005  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:10.623787  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:10.850741  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:11.123334  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:11.351616  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:11.623232  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:11.850988  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:12.123819  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:12.350844  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:12.623772  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:12.850860  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:13.124417  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:13.351834  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:13.623834  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:13.851800  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:14.123177  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:14.351439  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:14.622946  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:14.851976  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:15.125189  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:15.350846  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:15.623106  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:15.851582  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:16.124070  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:16.351096  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:16.623835  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:16.850683  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:17.123304  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:17.352316  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:17.622925  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:17.840248  149430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:09:17.852245  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:18.122519  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:18.352482  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:09:18.480188  149430 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:09:18.480296  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:09:18.480314  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:09:18.480613  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:09:18.480635  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:09:18.480643  149430 main.go:141] libmachine: Making call to close driver server
	I1019 12:09:18.480650  149430 main.go:141] libmachine: (addons-360741) Calling .Close
	I1019 12:09:18.480655  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	I1019 12:09:18.480877  149430 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:09:18.480894  149430 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:09:18.480930  149430 main.go:141] libmachine: (addons-360741) DBG | Closing plugin on server side
	W1019 12:09:18.480993  149430 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
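The 'inspektor-gadget' failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: at least one YAML document in that file is missing the top-level apiVersion and kind fields, so every apply retry exits with status 1 even though the other gadget resources report "unchanged" or "configured". The actual contents of ig-crd.yaml are not captured in this log; as a purely illustrative sketch, any document passed to kubectl apply has to declare both fields, e.g. a minimal (hypothetical) ConfigMap:

    # illustrative only -- not the contents of ig-crd.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-config        # placeholder name
    data:
      key: value

Passing --validate=false, as the error text suggests, would only suppress this check rather than supply the missing fields.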
	I1019 12:09:18.624068  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:18.852371  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:19.123088  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:19.351564  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:19.622795  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:19.850940  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:20.123170  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:20.351630  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:20.624182  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:20.851057  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:21.123185  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:21.351386  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:21.622969  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:21.851270  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:22.122942  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:22.351380  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:22.622861  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:22.852009  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:23.123526  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:23.351983  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:23.623940  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:23.851665  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:24.122690  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:24.350619  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:24.622806  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:24.850826  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:25.122996  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:25.351328  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:25.625018  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:25.851124  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:26.124089  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:26.351627  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:26.622991  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:26.850918  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:27.125962  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:27.350971  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:27.623517  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:27.852857  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:28.123427  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:28.351631  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:28.624509  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:28.852649  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:29.123655  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:29.351308  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:29.622999  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:29.851361  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:30.123188  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:30.351218  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:30.624570  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:30.851551  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:31.123177  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:31.351135  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:31.623569  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:31.851374  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:32.123093  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:32.351212  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:32.623327  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:32.852254  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:33.123368  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:33.352059  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:33.626951  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:33.851686  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:34.122905  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:34.351400  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:34.622611  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:34.851962  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:35.123751  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:35.350900  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:35.623666  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:35.850327  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:36.122782  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:36.350698  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:36.622850  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:36.850882  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:37.123307  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:37.351308  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:37.623122  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:37.851309  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:38.122356  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:38.351664  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:38.623400  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:38.851928  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:39.123912  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:39.350986  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:39.624115  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:39.851014  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:40.124155  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:40.351880  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:40.623037  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:40.851596  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:41.123069  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:41.350818  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:41.624160  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:41.850931  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:42.129478  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:42.351821  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:42.623150  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:42.851718  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:43.124073  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:43.351131  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:43.628624  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:43.852101  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:44.123157  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:44.353202  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:44.623249  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:44.851407  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:45.122995  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:45.350954  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:45.624022  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:45.851637  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:46.122960  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:46.351697  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:46.623191  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:46.850925  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:47.123359  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:47.351168  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:47.623314  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:47.851311  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:48.124320  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:48.352713  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:48.625155  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:48.854402  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:49.127309  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:49.351898  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:49.624627  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:49.851866  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:50.126430  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:50.355230  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:50.624179  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:50.852976  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:51.124848  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:51.351099  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:51.624677  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:51.851924  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:52.123145  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:52.353839  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:52.623736  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:53.090937  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:53.123603  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:53.352130  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:53.628445  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:53.854229  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:54.122939  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:54.351218  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:54.623565  149430 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:09:54.851394  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:55.129371  149430 kapi.go:107] duration metric: took 2m21.509983946s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 12:09:55.352513  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:55.851881  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:56.351562  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:56.850919  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:57.351173  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:57.853496  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:58.353505  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:58.855106  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:59.353384  149430 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:09:59.851557  149430 kapi.go:107] duration metric: took 2m23.003889062s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 12:09:59.853101  149430 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-360741 cluster.
	I1019 12:09:59.854114  149430 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 12:09:59.855097  149430 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1019 12:09:59.856186  149430 out.go:179] * Enabled addons: registry-creds, cloud-spanner, storage-provisioner, metrics-server, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1019 12:09:59.857151  149430 addons.go:514] duration metric: took 2m34.784296207s for enable addons: enabled=[registry-creds cloud-spanner storage-provisioner metrics-server amd-gpu-device-plugin ingress-dns nvidia-device-plugin yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1019 12:09:59.857199  149430 start.go:246] waiting for cluster config update ...
	I1019 12:09:59.857217  149430 start.go:255] writing updated cluster config ...
	I1019 12:09:59.857683  149430 ssh_runner.go:195] Run: rm -f paused
	I1019 12:09:59.863184  149430 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:09:59.867055  149430 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kd585" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:09:59.872294  149430 pod_ready.go:94] pod "coredns-66bc5c9577-kd585" is "Ready"
	I1019 12:09:59.872315  149430 pod_ready.go:86] duration metric: took 5.239376ms for pod "coredns-66bc5c9577-kd585" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:09:59.874126  149430 pod_ready.go:83] waiting for pod "etcd-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:09:59.878693  149430 pod_ready.go:94] pod "etcd-addons-360741" is "Ready"
	I1019 12:09:59.878714  149430 pod_ready.go:86] duration metric: took 4.569935ms for pod "etcd-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:09:59.880628  149430 pod_ready.go:83] waiting for pod "kube-apiserver-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:09:59.885139  149430 pod_ready.go:94] pod "kube-apiserver-addons-360741" is "Ready"
	I1019 12:09:59.885162  149430 pod_ready.go:86] duration metric: took 4.517501ms for pod "kube-apiserver-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:09:59.887047  149430 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:10:00.267748  149430 pod_ready.go:94] pod "kube-controller-manager-addons-360741" is "Ready"
	I1019 12:10:00.267775  149430 pod_ready.go:86] duration metric: took 380.710798ms for pod "kube-controller-manager-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:10:00.468183  149430 pod_ready.go:83] waiting for pod "kube-proxy-42tdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:10:00.867193  149430 pod_ready.go:94] pod "kube-proxy-42tdl" is "Ready"
	I1019 12:10:00.867227  149430 pod_ready.go:86] duration metric: took 399.014962ms for pod "kube-proxy-42tdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:10:01.067465  149430 pod_ready.go:83] waiting for pod "kube-scheduler-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:10:01.467774  149430 pod_ready.go:94] pod "kube-scheduler-addons-360741" is "Ready"
	I1019 12:10:01.467806  149430 pod_ready.go:86] duration metric: took 400.314619ms for pod "kube-scheduler-addons-360741" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:10:01.467819  149430 pod_ready.go:40] duration metric: took 1.60460272s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:10:01.516005  149430 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:10:01.517205  149430 out.go:179] * Done! kubectl is now configured to use "addons-360741" cluster and "default" namespace by default
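	Editor's sketch (not part of the captured run): the gcp-auth output above notes that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. The minimal Go program below builds such a pod spec with the standard client-go API types and prints it as YAML; the pod name, container image, and label value are illustrative assumptions, not values taken from this test.

	// Hypothetical example: construct a pod that the gcp-auth webhook should
	// skip, based on the `gcp-auth-skip-secret` label mentioned in the log
	// output above. Name, image, and label value are assumptions.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				Labels: map[string]string{
					// Presence of this label key is what the gcp-auth addon
					// message says opts the pod out of credential mounting.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox"}, // illustrative container
				},
			},
		}

		out, err := yaml.Marshal(&pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}

	Applying the printed manifest with kubectl (as with the testdata manifests used elsewhere in this report) would create a pod that, per the addon's own message, does not get the GCP credentials secret mounted.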
	
	
	==> CRI-O <==
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.321579118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d935a58b-5f5e-479e-8e1f-80f8f5b5925c name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.321889950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b04b48e564b038f83f75a57439e9216f0b7920745cfce385397dbcb6b18daf7,PodSandboxId:f862ac793ad1d1ada04e8c96c66120acd8fc80acfafb38c980cb6d563a9cec9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760875834852414911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4df20794-dce6-4998-a971-227e36294dea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eac785e81570eaf128955cddd8573a0f6645e430e670b139d71d14bd89a85c,PodSandboxId:dbc708f0c1827bbeb65b942f0d8c7b3ba160f3c90dd111cbb577316d4108596d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760875806563243711,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ddf4899-804d-4751-a068-633eef6d521f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96516075428d98000b975f6f5226f4fb68704b3673562a1b2daad0a435ee8ce8,PodSandboxId:e7d71e04779ec9ceaa91c296c3250346ed52e28395646a45e3874f97aadf702f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760875794529378512,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-frqnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ddedea74-2afc-428f-a9d7-d1a2644b05bf,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8cf4ec445262f02f469d7d6b380b6c2cd2950803996f1831e5a86bd87e19e87c,PodSandboxId:c3e7ead96e59ec5260cc52c6bd2ff16aea5adfb47a3431395ddbdf2b2853461f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760875718689759545,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qcphd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0ce7b76-17b7-4eb5-8a2a-1074fa5681bd,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d91bcca57a75639a4e455fe12ea53aaeffc66b08e59f802cc1cf495cc842b0,PodSandboxId:48b3b7d3ff53e79bf9ee560ed1e435dc7aa453de06451a26fb372f3fb3fecf44,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760875718357593711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gm2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3ac6aa-c84c-4623-870f-22a66b5db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7479322b0c6cbb313f88b68c729fcbb93945992e5be5a24cfa1017a19e48d060,PodSandboxId:3a48ae7bcf0087e3e76b4ce7b8b734f60504dcf14b58651f3798633e2121b3f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760875716233799300,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-tsw7m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb13f53d-2da7-46ba-9bdc-63379171ccc1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f45f2f7133dcf3f20cdca09037f41bd5795631b849d9d0e29ec0da64babcffd,PodSandboxId:b5d0421050eb5494b90251b0d694b0c1df82ff78a381f811c298b299e29cb8f9,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760875711820512085,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fc69p,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14fd8ed2-00df-4440-ad56-9dd5d1b08268,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f107ffc1e1e5dd133e90647daf47b436e6a02fcd0ab89775d57c041b0204210,PodSandboxId:6236e6fc767982c2bdfcb86b90f9f237eb69ef64eeab836ab4b7473b2a6ca824,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760875694696659599,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6b2cc2-bccb-4e31-9491-81ca4108d468,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6369cc774840ddaad16596ea4243f740f33bf2da668709142291dac67f71a7c,PodSandboxId:5c77e40df8225583a7b3aacea239fb557895
a8c6ccf1e464233ed09c6a749687,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760875680582978845,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-s28qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e690aa-169f-4ed1-afd6-03256ec9b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064c0829cbb92da55d0aa323e00c4564a33e0d5525f8bf7c1efbabb82b0c045f,PodSandbo
xId:3f9bf4e3b71fff5e6740bc2a340a0ea3696df716de5a92853d21e15edaaae160,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760875651973965080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2a0cd2b-d30e-4eb4-a6a0-5acae325b054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:045161ac6ac9a245f0602b29971221f2f99a4d452e9ed1c56a3ea4da99a2df00,PodSandboxId:0a297c08
5fd086f15045566abf079108832e89ccefafd6c102de4c4b26e61390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760875648422596464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kd585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5963a053-34ba-4523-9c9c-bb9ed7e3e9b0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f372b40f3f35f3172909781f4872553a7e8003cefdf4a4b1f058bdc61fa287,PodSandboxId:3838ee9242b6db6004a2ceac1b25386236351cad7ddd582c95eb8bdc5e88116b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760875646118406386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42tdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dd16ff-e00f-44ee-a3d2-843647158a21,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac924b0bb50e4c1d55c4d0f397b684da4e49873e58dbc3dca84377151708c8b1,PodSandboxId:df4dcbbc7f437673e567a85103ad2169da94c790b0450c1a7cb292523449ba7a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760875634788431686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988d31fa44461bd9ba116a38e93c586c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2588551d45e2f6de88c10603bdd40fd43687fc28aa49a2578d6a6ab5f5191f,PodSandboxId:98be4d5dab0e27e347df5a6efcc5f7e32235ed8d4f31eb28835e7017dc1dc9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760875634763667675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eaef73e268f64af10d5d108d04ac194,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f488179223646627422ee33ace507f203c22c7f8944bc6de4424c23d825c86,PodSandboxId:05705c72a931ef9cd287870121be706f2060076f9ea471b07dbfc161665a5b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760875634743653649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-360741,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bccf7d5ce524f465cf635b472b1be2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a02a7dd53a9d28b5036d3a0ce0bf08b08b42411227153f45b117ce5f0b108ac,PodSandboxId:741752780a9c070699707219d4e25e7c50a2235b00b337a007b038b9b9f31436,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760875634708755094,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f44478b0247f8787b225607f049fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d935a58b-5f5e-479e-8e1f-80f8f5b5925c name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.363097131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b383ec25-785e-4614-9f22-9be24eb7274e name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.363365159Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b383ec25-785e-4614-9f22-9be24eb7274e name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.364747672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cf9b4d7-6534-4d0b-8833-0c85c9cdd27d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.366034308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760875976366010692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cf9b4d7-6534-4d0b-8833-0c85c9cdd27d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.366775238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e23f66e-315b-4ee1-9a4b-f40ad0a0066e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.366882849Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e23f66e-315b-4ee1-9a4b-f40ad0a0066e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.367239838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b04b48e564b038f83f75a57439e9216f0b7920745cfce385397dbcb6b18daf7,PodSandboxId:f862ac793ad1d1ada04e8c96c66120acd8fc80acfafb38c980cb6d563a9cec9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760875834852414911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4df20794-dce6-4998-a971-227e36294dea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eac785e81570eaf128955cddd8573a0f6645e430e670b139d71d14bd89a85c,PodSandboxId:dbc708f0c1827bbeb65b942f0d8c7b3ba160f3c90dd111cbb577316d4108596d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760875806563243711,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ddf4899-804d-4751-a068-633eef6d521f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96516075428d98000b975f6f5226f4fb68704b3673562a1b2daad0a435ee8ce8,PodSandboxId:e7d71e04779ec9ceaa91c296c3250346ed52e28395646a45e3874f97aadf702f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760875794529378512,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-frqnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ddedea74-2afc-428f-a9d7-d1a2644b05bf,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8cf4ec445262f02f469d7d6b380b6c2cd2950803996f1831e5a86bd87e19e87c,PodSandboxId:c3e7ead96e59ec5260cc52c6bd2ff16aea5adfb47a3431395ddbdf2b2853461f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760875718689759545,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qcphd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0ce7b76-17b7-4eb5-8a2a-1074fa5681bd,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d91bcca57a75639a4e455fe12ea53aaeffc66b08e59f802cc1cf495cc842b0,PodSandboxId:48b3b7d3ff53e79bf9ee560ed1e435dc7aa453de06451a26fb372f3fb3fecf44,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760875718357593711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gm2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3ac6aa-c84c-4623-870f-22a66b5db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7479322b0c6cbb313f88b68c729fcbb93945992e5be5a24cfa1017a19e48d060,PodSandboxId:3a48ae7bcf0087e3e76b4ce7b8b734f60504dcf14b58651f3798633e2121b3f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760875716233799300,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-tsw7m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb13f53d-2da7-46ba-9bdc-63379171ccc1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f45f2f7133dcf3f20cdca09037f41bd5795631b849d9d0e29ec0da64babcffd,PodSandboxId:b5d0421050eb5494b90251b0d694b0c1df82ff78a381f811c298b299e29cb8f9,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760875711820512085,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fc69p,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14fd8ed2-00df-4440-ad56-9dd5d1b08268,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f107ffc1e1e5dd133e90647daf47b436e6a02fcd0ab89775d57c041b0204210,PodSandboxId:6236e6fc767982c2bdfcb86b90f9f237eb69ef64eeab836ab4b7473b2a6ca824,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760875694696659599,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6b2cc2-bccb-4e31-9491-81ca4108d468,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6369cc774840ddaad16596ea4243f740f33bf2da668709142291dac67f71a7c,PodSandboxId:5c77e40df8225583a7b3aacea239fb557895
a8c6ccf1e464233ed09c6a749687,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760875680582978845,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-s28qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e690aa-169f-4ed1-afd6-03256ec9b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064c0829cbb92da55d0aa323e00c4564a33e0d5525f8bf7c1efbabb82b0c045f,PodSandbo
xId:3f9bf4e3b71fff5e6740bc2a340a0ea3696df716de5a92853d21e15edaaae160,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760875651973965080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2a0cd2b-d30e-4eb4-a6a0-5acae325b054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:045161ac6ac9a245f0602b29971221f2f99a4d452e9ed1c56a3ea4da99a2df00,PodSandboxId:0a297c08
5fd086f15045566abf079108832e89ccefafd6c102de4c4b26e61390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760875648422596464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kd585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5963a053-34ba-4523-9c9c-bb9ed7e3e9b0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f372b40f3f35f3172909781f4872553a7e8003cefdf4a4b1f058bdc61fa287,PodSandboxId:3838ee9242b6db6004a2ceac1b25386236351cad7ddd582c95eb8bdc5e88116b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760875646118406386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42tdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dd16ff-e00f-44ee-a3d2-843647158a21,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac924b0bb50e4c1d55c4d0f397b684da4e49873e58dbc3dca84377151708c8b1,PodSandboxId:df4dcbbc7f437673e567a85103ad2169da94c790b0450c1a7cb292523449ba7a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760875634788431686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988d31fa44461bd9ba116a38e93c586c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2588551d45e2f6de88c10603bdd40fd43687fc28aa49a2578d6a6ab5f5191f,PodSandboxId:98be4d5dab0e27e347df5a6efcc5f7e32235ed8d4f31eb28835e7017dc1dc9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760875634763667675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eaef73e268f64af10d5d108d04ac194,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f488179223646627422ee33ace507f203c22c7f8944bc6de4424c23d825c86,PodSandboxId:05705c72a931ef9cd287870121be706f2060076f9ea471b07dbfc161665a5b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760875634743653649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-360741,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bccf7d5ce524f465cf635b472b1be2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a02a7dd53a9d28b5036d3a0ce0bf08b08b42411227153f45b117ce5f0b108ac,PodSandboxId:741752780a9c070699707219d4e25e7c50a2235b00b337a007b038b9b9f31436,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760875634708755094,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f44478b0247f8787b225607f049fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e23f66e-315b-4ee1-9a4b-f40ad0a0066e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.402669748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42032536-bfad-4305-804d-8756f40a11cf name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.402734592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42032536-bfad-4305-804d-8756f40a11cf name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.403960018Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9194d019-55c0-4c34-9f7c-4d5459ca8659 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.405432457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760875976405408710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9194d019-55c0-4c34-9f7c-4d5459ca8659 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.405924436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a575926d-5fd6-4fc0-a675-2bac9b655baa name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.405980474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a575926d-5fd6-4fc0-a675-2bac9b655baa name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.406318650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b04b48e564b038f83f75a57439e9216f0b7920745cfce385397dbcb6b18daf7,PodSandboxId:f862ac793ad1d1ada04e8c96c66120acd8fc80acfafb38c980cb6d563a9cec9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760875834852414911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4df20794-dce6-4998-a971-227e36294dea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eac785e81570eaf128955cddd8573a0f6645e430e670b139d71d14bd89a85c,PodSandboxId:dbc708f0c1827bbeb65b942f0d8c7b3ba160f3c90dd111cbb577316d4108596d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760875806563243711,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ddf4899-804d-4751-a068-633eef6d521f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96516075428d98000b975f6f5226f4fb68704b3673562a1b2daad0a435ee8ce8,PodSandboxId:e7d71e04779ec9ceaa91c296c3250346ed52e28395646a45e3874f97aadf702f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760875794529378512,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-frqnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ddedea74-2afc-428f-a9d7-d1a2644b05bf,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8cf4ec445262f02f469d7d6b380b6c2cd2950803996f1831e5a86bd87e19e87c,PodSandboxId:c3e7ead96e59ec5260cc52c6bd2ff16aea5adfb47a3431395ddbdf2b2853461f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760875718689759545,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qcphd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0ce7b76-17b7-4eb5-8a2a-1074fa5681bd,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d91bcca57a75639a4e455fe12ea53aaeffc66b08e59f802cc1cf495cc842b0,PodSandboxId:48b3b7d3ff53e79bf9ee560ed1e435dc7aa453de06451a26fb372f3fb3fecf44,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760875718357593711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gm2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3ac6aa-c84c-4623-870f-22a66b5db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7479322b0c6cbb313f88b68c729fcbb93945992e5be5a24cfa1017a19e48d060,PodSandboxId:3a48ae7bcf0087e3e76b4ce7b8b734f60504dcf14b58651f3798633e2121b3f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760875716233799300,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-tsw7m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb13f53d-2da7-46ba-9bdc-63379171ccc1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f45f2f7133dcf3f20cdca09037f41bd5795631b849d9d0e29ec0da64babcffd,PodSandboxId:b5d0421050eb5494b90251b0d694b0c1df82ff78a381f811c298b299e29cb8f9,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760875711820512085,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fc69p,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14fd8ed2-00df-4440-ad56-9dd5d1b08268,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f107ffc1e1e5dd133e90647daf47b436e6a02fcd0ab89775d57c041b0204210,PodSandboxId:6236e6fc767982c2bdfcb86b90f9f237eb69ef64eeab836ab4b7473b2a6ca824,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760875694696659599,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6b2cc2-bccb-4e31-9491-81ca4108d468,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6369cc774840ddaad16596ea4243f740f33bf2da668709142291dac67f71a7c,PodSandboxId:5c77e40df8225583a7b3aacea239fb557895
a8c6ccf1e464233ed09c6a749687,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760875680582978845,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-s28qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e690aa-169f-4ed1-afd6-03256ec9b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064c0829cbb92da55d0aa323e00c4564a33e0d5525f8bf7c1efbabb82b0c045f,PodSandbo
xId:3f9bf4e3b71fff5e6740bc2a340a0ea3696df716de5a92853d21e15edaaae160,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760875651973965080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2a0cd2b-d30e-4eb4-a6a0-5acae325b054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:045161ac6ac9a245f0602b29971221f2f99a4d452e9ed1c56a3ea4da99a2df00,PodSandboxId:0a297c08
5fd086f15045566abf079108832e89ccefafd6c102de4c4b26e61390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760875648422596464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kd585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5963a053-34ba-4523-9c9c-bb9ed7e3e9b0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f372b40f3f35f3172909781f4872553a7e8003cefdf4a4b1f058bdc61fa287,PodSandboxId:3838ee9242b6db6004a2ceac1b25386236351cad7ddd582c95eb8bdc5e88116b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760875646118406386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42tdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dd16ff-e00f-44ee-a3d2-843647158a21,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac924b0bb50e4c1d55c4d0f397b684da4e49873e58dbc3dca84377151708c8b1,PodSandboxId:df4dcbbc7f437673e567a85103ad2169da94c790b0450c1a7cb292523449ba7a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760875634788431686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988d31fa44461bd9ba116a38e93c586c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2588551d45e2f6de88c10603bdd40fd43687fc28aa49a2578d6a6ab5f5191f,PodSandboxId:98be4d5dab0e27e347df5a6efcc5f7e32235ed8d4f31eb28835e7017dc1dc9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760875634763667675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eaef73e268f64af10d5d108d04ac194,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f488179223646627422ee33ace507f203c22c7f8944bc6de4424c23d825c86,PodSandboxId:05705c72a931ef9cd287870121be706f2060076f9ea471b07dbfc161665a5b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760875634743653649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-360741,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bccf7d5ce524f465cf635b472b1be2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a02a7dd53a9d28b5036d3a0ce0bf08b08b42411227153f45b117ce5f0b108ac,PodSandboxId:741752780a9c070699707219d4e25e7c50a2235b00b337a007b038b9b9f31436,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760875634708755094,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f44478b0247f8787b225607f049fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a575926d-5fd6-4fc0-a675-2bac9b655baa name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.436670526Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=4254794e-8466-4cfb-9e90-29dee76d80bc name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.436739463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4254794e-8466-4cfb-9e90-29dee76d80bc name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.441015411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9546f82-6830-48ff-b4b4-7462903d4308 name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.441091174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9546f82-6830-48ff-b4b4-7462903d4308 name=/runtime.v1.RuntimeService/Version
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.443160753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2dbb2a2c-e775-4e32-b58c-c2204b6e3125 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.446421845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760875976446388561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dbb2a2c-e775-4e32-b58c-c2204b6e3125 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.449066225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=667450ad-d791-4036-abd0-18a3712dc735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.449152906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=667450ad-d791-4036-abd0-18a3712dc735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:12:56 addons-360741 crio[814]: time="2025-10-19 12:12:56.449640532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b04b48e564b038f83f75a57439e9216f0b7920745cfce385397dbcb6b18daf7,PodSandboxId:f862ac793ad1d1ada04e8c96c66120acd8fc80acfafb38c980cb6d563a9cec9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760875834852414911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4df20794-dce6-4998-a971-227e36294dea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eac785e81570eaf128955cddd8573a0f6645e430e670b139d71d14bd89a85c,PodSandboxId:dbc708f0c1827bbeb65b942f0d8c7b3ba160f3c90dd111cbb577316d4108596d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760875806563243711,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ddf4899-804d-4751-a068-633eef6d521f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96516075428d98000b975f6f5226f4fb68704b3673562a1b2daad0a435ee8ce8,PodSandboxId:e7d71e04779ec9ceaa91c296c3250346ed52e28395646a45e3874f97aadf702f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760875794529378512,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-frqnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ddedea74-2afc-428f-a9d7-d1a2644b05bf,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8cf4ec445262f02f469d7d6b380b6c2cd2950803996f1831e5a86bd87e19e87c,PodSandboxId:c3e7ead96e59ec5260cc52c6bd2ff16aea5adfb47a3431395ddbdf2b2853461f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760875718689759545,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qcphd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0ce7b76-17b7-4eb5-8a2a-1074fa5681bd,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d91bcca57a75639a4e455fe12ea53aaeffc66b08e59f802cc1cf495cc842b0,PodSandboxId:48b3b7d3ff53e79bf9ee560ed1e435dc7aa453de06451a26fb372f3fb3fecf44,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760875718357593711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gm2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3ac6aa-c84c-4623-870f-22a66b5db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7479322b0c6cbb313f88b68c729fcbb93945992e5be5a24cfa1017a19e48d060,PodSandboxId:3a48ae7bcf0087e3e76b4ce7b8b734f60504dcf14b58651f3798633e2121b3f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760875716233799300,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-tsw7m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb13f53d-2da7-46ba-9bdc-63379171ccc1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f45f2f7133dcf3f20cdca09037f41bd5795631b849d9d0e29ec0da64babcffd,PodSandboxId:b5d0421050eb5494b90251b0d694b0c1df82ff78a381f811c298b299e29cb8f9,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760875711820512085,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fc69p,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14fd8ed2-00df-4440-ad56-9dd5d1b08268,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f107ffc1e1e5dd133e90647daf47b436e6a02fcd0ab89775d57c041b0204210,PodSandboxId:6236e6fc767982c2bdfcb86b90f9f237eb69ef64eeab836ab4b7473b2a6ca824,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760875694696659599,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6b2cc2-bccb-4e31-9491-81ca4108d468,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6369cc774840ddaad16596ea4243f740f33bf2da668709142291dac67f71a7c,PodSandboxId:5c77e40df8225583a7b3aacea239fb557895
a8c6ccf1e464233ed09c6a749687,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760875680582978845,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-s28qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e690aa-169f-4ed1-afd6-03256ec9b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064c0829cbb92da55d0aa323e00c4564a33e0d5525f8bf7c1efbabb82b0c045f,PodSandbo
xId:3f9bf4e3b71fff5e6740bc2a340a0ea3696df716de5a92853d21e15edaaae160,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760875651973965080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2a0cd2b-d30e-4eb4-a6a0-5acae325b054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:045161ac6ac9a245f0602b29971221f2f99a4d452e9ed1c56a3ea4da99a2df00,PodSandboxId:0a297c08
5fd086f15045566abf079108832e89ccefafd6c102de4c4b26e61390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760875648422596464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kd585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5963a053-34ba-4523-9c9c-bb9ed7e3e9b0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f372b40f3f35f3172909781f4872553a7e8003cefdf4a4b1f058bdc61fa287,PodSandboxId:3838ee9242b6db6004a2ceac1b25386236351cad7ddd582c95eb8bdc5e88116b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760875646118406386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42tdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dd16ff-e00f-44ee-a3d2-843647158a21,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac924b0bb50e4c1d55c4d0f397b684da4e49873e58dbc3dca84377151708c8b1,PodSandboxId:df4dcbbc7f437673e567a85103ad2169da94c790b0450c1a7cb292523449ba7a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760875634788431686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988d31fa44461bd9ba116a38e93c586c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2588551d45e2f6de88c10603bdd40fd43687fc28aa49a2578d6a6ab5f5191f,PodSandboxId:98be4d5dab0e27e347df5a6efcc5f7e32235ed8d4f31eb28835e7017dc1dc9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760875634763667675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eaef73e268f64af10d5d108d04ac194,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f488179223646627422ee33ace507f203c22c7f8944bc6de4424c23d825c86,PodSandboxId:05705c72a931ef9cd287870121be706f2060076f9ea471b07dbfc161665a5b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760875634743653649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-360741,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bccf7d5ce524f465cf635b472b1be2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a02a7dd53a9d28b5036d3a0ce0bf08b08b42411227153f45b117ce5f0b108ac,PodSandboxId:741752780a9c070699707219d4e25e7c50a2235b00b337a007b038b9b9f31436,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760875634708755094,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-360741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f44478b0247f8787b225607f049fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=667450ad-d791-4036-abd0-18a3712dc735 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b04b48e564b0       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   f862ac793ad1d       nginx
	85eac785e8157       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   dbc708f0c1827       busybox
	96516075428d9       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   e7d71e04779ec       ingress-nginx-controller-675c5ddd98-frqnj
	8cf4ec445262f       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     1                   c3e7ead96e59e       ingress-nginx-admission-patch-qcphd
	e7d91bcca57a7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   48b3b7d3ff53e       ingress-nginx-admission-create-9gm2b
	7479322b0c6cb       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   3a48ae7bcf008       local-path-provisioner-648f6765c9-tsw7m
	8f45f2f7133dc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   b5d0421050eb5       gadget-fc69p
	0f107ffc1e1e5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   6236e6fc76798       kube-ingress-dns-minikube
	c6369cc774840       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   5c77e40df8225       amd-gpu-device-plugin-s28qs
	064c0829cbb92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   3f9bf4e3b71ff       storage-provisioner
	045161ac6ac9a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   0a297c085fd08       coredns-66bc5c9577-kd585
	54f372b40f3f3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   3838ee9242b6d       kube-proxy-42tdl
	ac924b0bb50e4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   df4dcbbc7f437       kube-scheduler-addons-360741
	2b2588551d45e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   98be4d5dab0e2       etcd-addons-360741
	b3f4881792236       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   05705c72a931e       kube-apiserver-addons-360741
	9a02a7dd53a9d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   741752780a9c0       kube-controller-manager-addons-360741
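	The table above is the node-level CRI view of the workloads on this profile. Assuming crictl is available inside the minikube guest, as is typical for the VM image used by this driver, a roughly equivalent listing could be regenerated with the sketch below (profile name taken from the node shown in this log):
	
	    minikube ssh -p addons-360741 -- sudo crictl ps -a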
	
	
	==> coredns [045161ac6ac9a245f0602b29971221f2f99a4d452e9ed1c56a3ea4da99a2df00] <==
	[INFO] 10.244.0.8:40733 - 50685 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000125378s
	[INFO] 10.244.0.8:40733 - 4477 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000127776s
	[INFO] 10.244.0.8:40733 - 51940 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080648s
	[INFO] 10.244.0.8:40733 - 6360 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000191652s
	[INFO] 10.244.0.8:40733 - 47328 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000075695s
	[INFO] 10.244.0.8:40733 - 6625 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000076745s
	[INFO] 10.244.0.8:40733 - 58136 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000205057s
	[INFO] 10.244.0.8:33168 - 57216 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073515s
	[INFO] 10.244.0.8:33168 - 57473 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137832s
	[INFO] 10.244.0.8:43592 - 7658 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105389s
	[INFO] 10.244.0.8:43592 - 7951 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106812s
	[INFO] 10.244.0.8:35431 - 61858 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099554s
	[INFO] 10.244.0.8:35431 - 62133 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000140644s
	[INFO] 10.244.0.8:41583 - 16372 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173506s
	[INFO] 10.244.0.8:41583 - 16574 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006792s
	[INFO] 10.244.0.23:59208 - 17132 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000340836s
	[INFO] 10.244.0.23:49537 - 42010 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213365s
	[INFO] 10.244.0.23:32889 - 64110 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109919s
	[INFO] 10.244.0.23:40292 - 20965 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122054s
	[INFO] 10.244.0.23:56312 - 31505 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101373s
	[INFO] 10.244.0.23:36221 - 14568 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000184432s
	[INFO] 10.244.0.23:57026 - 58774 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003953365s
	[INFO] 10.244.0.23:59091 - 1476 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004172497s
	[INFO] 10.244.0.27:51515 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000278593s
	[INFO] 10.244.0.27:45082 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000135388s
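	The NXDOMAIN/NOERROR pairs above are the expected effect of the pod DNS search path: with ndots:5, a name such as registry.kube-system.svc.cluster.local is first retried with each search suffix appended (producing the NXDOMAIN answers) before the absolute form resolves. For the kube-system pod at 10.244.0.8, the resolver configuration would look roughly like this sketch; the nameserver address is illustrative and not taken from these logs:
	
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local
	    nameserver 10.96.0.10
	    options ndots:5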
	
	
	==> describe nodes <==
	Name:               addons-360741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-360741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=addons-360741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_07_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-360741
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:07:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-360741
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:12:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:11:24 +0000   Sun, 19 Oct 2025 12:07:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:11:24 +0000   Sun, 19 Oct 2025 12:07:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:11:24 +0000   Sun, 19 Oct 2025 12:07:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:11:24 +0000   Sun, 19 Oct 2025 12:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    addons-360741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 0563e0d098964b39a2960acfe6f230dc
	  System UUID:                0563e0d0-9896-4b39-a296-0acfe6f230dc
	  Boot ID:                    5ce46595-1872-462b-a6cf-f5f0ace7ccc1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     hello-world-app-5d498dc89-4p68l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-fc69p                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-frqnj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m23s
	  kube-system                 amd-gpu-device-plugin-s28qs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 coredns-66bc5c9577-kd585                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m31s
	  kube-system                 etcd-addons-360741                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m36s
	  kube-system                 kube-apiserver-addons-360741                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-addons-360741        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-proxy-42tdl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-addons-360741                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  local-path-storage          local-path-provisioner-648f6765c9-tsw7m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m29s  kube-proxy       
	  Normal  Starting                 5m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m36s  kubelet          Node addons-360741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s  kubelet          Node addons-360741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s  kubelet          Node addons-360741 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m35s  kubelet          Node addons-360741 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node addons-360741 event: Registered Node addons-360741 in Controller
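	The node description above is the usual kubectl describe output. Assuming the kubeconfig context is named after the profile, it could be regenerated with:
	
	    kubectl --context addons-360741 describe node addons-360741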
	
	
	==> dmesg <==
	[  +7.350196] kauditd_printk_skb: 5 callbacks suppressed
	[Oct19 12:08] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.137464] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.934089] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.613903] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.582683] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.711414] kauditd_printk_skb: 50 callbacks suppressed
	[  +3.543115] kauditd_printk_skb: 112 callbacks suppressed
	[  +3.902451] kauditd_printk_skb: 120 callbacks suppressed
	[Oct19 12:09] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.499465] kauditd_printk_skb: 29 callbacks suppressed
	[  +4.749509] kauditd_printk_skb: 68 callbacks suppressed
	[Oct19 12:10] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.923370] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.913232] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.834640] kauditd_printk_skb: 38 callbacks suppressed
	[  +3.599497] kauditd_printk_skb: 141 callbacks suppressed
	[  +1.524334] kauditd_printk_skb: 88 callbacks suppressed
	[  +4.916886] kauditd_printk_skb: 65 callbacks suppressed
	[  +2.163682] kauditd_printk_skb: 88 callbacks suppressed
	[  +1.583546] kauditd_printk_skb: 98 callbacks suppressed
	[Oct19 12:11] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.862872] kauditd_printk_skb: 41 callbacks suppressed
	[Oct19 12:12] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [2b2588551d45e2f6de88c10603bdd40fd43687fc28aa49a2578d6a6ab5f5191f] <==
	{"level":"warn","ts":"2025-10-19T12:08:20.513564Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.154569ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:08:20.514001Z","caller":"traceutil/trace.go:172","msg":"trace[198371828] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:954; }","duration":"106.60001ms","start":"2025-10-19T12:08:20.407392Z","end":"2025-10-19T12:08:20.513992Z","steps":["trace[198371828] 'agreement among raft nodes before linearized reading'  (duration: 105.987792ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:08:24.233309Z","caller":"traceutil/trace.go:172","msg":"trace[587183813] linearizableReadLoop","detail":"{readStateIndex:1003; appliedIndex:1003; }","duration":"127.625918ms","start":"2025-10-19T12:08:24.105670Z","end":"2025-10-19T12:08:24.233295Z","steps":["trace[587183813] 'read index received'  (duration: 127.620358ms)","trace[587183813] 'applied index is now lower than readState.Index'  (duration: 4.757µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:08:24.233434Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.750362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:08:24.233511Z","caller":"traceutil/trace.go:172","msg":"trace[341454824] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:974; }","duration":"127.840506ms","start":"2025-10-19T12:08:24.105665Z","end":"2025-10-19T12:08:24.233505Z","steps":["trace[341454824] 'agreement among raft nodes before linearized reading'  (duration: 127.725208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:08:24.369027Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.360234ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16803099518905191976 > lease_revoke:<id:693099fc5de18d32>","response":"size:27"}
	{"level":"info","ts":"2025-10-19T12:08:24.369106Z","caller":"traceutil/trace.go:172","msg":"trace[13273556] linearizableReadLoop","detail":"{readStateIndex:1004; appliedIndex:1003; }","duration":"135.736286ms","start":"2025-10-19T12:08:24.233361Z","end":"2025-10-19T12:08:24.369097Z","steps":["trace[13273556] 'read index received'  (duration: 12.975µs)","trace[13273556] 'applied index is now lower than readState.Index'  (duration: 135.722554ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:08:24.369181Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.120882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:08:24.369195Z","caller":"traceutil/trace.go:172","msg":"trace[1203726968] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:974; }","duration":"250.141146ms","start":"2025-10-19T12:08:24.119049Z","end":"2025-10-19T12:08:24.369191Z","steps":["trace[1203726968] 'agreement among raft nodes before linearized reading'  (duration: 250.099382ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:08:24.369215Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.999253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:08:24.369271Z","caller":"traceutil/trace.go:172","msg":"trace[635476352] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:974; }","duration":"134.061003ms","start":"2025-10-19T12:08:24.235202Z","end":"2025-10-19T12:08:24.369263Z","steps":["trace[635476352] 'agreement among raft nodes before linearized reading'  (duration: 133.975927ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:09:21.265217Z","caller":"traceutil/trace.go:172","msg":"trace[1141709666] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"137.412401ms","start":"2025-10-19T12:09:21.127790Z","end":"2025-10-19T12:09:21.265203Z","steps":["trace[1141709666] 'process raft request'  (duration: 137.315435ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:09:53.083195Z","caller":"traceutil/trace.go:172","msg":"trace[1281324591] linearizableReadLoop","detail":"{readStateIndex:1267; appliedIndex:1267; }","duration":"238.471306ms","start":"2025-10-19T12:09:52.844700Z","end":"2025-10-19T12:09:53.083172Z","steps":["trace[1281324591] 'read index received'  (duration: 238.465252ms)","trace[1281324591] 'applied index is now lower than readState.Index'  (duration: 5.121µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:09:53.083319Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"238.615437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:09:53.083389Z","caller":"traceutil/trace.go:172","msg":"trace[247450732] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1219; }","duration":"238.704559ms","start":"2025-10-19T12:09:52.844676Z","end":"2025-10-19T12:09:53.083380Z","steps":["trace[247450732] 'agreement among raft nodes before linearized reading'  (duration: 238.577185ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:09:53.083649Z","caller":"traceutil/trace.go:172","msg":"trace[2043329158] transaction","detail":"{read_only:false; response_revision:1220; number_of_response:1; }","duration":"262.538607ms","start":"2025-10-19T12:09:52.821101Z","end":"2025-10-19T12:09:53.083639Z","steps":["trace[2043329158] 'process raft request'  (duration: 262.194933ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:10:03.659762Z","caller":"traceutil/trace.go:172","msg":"trace[750000786] transaction","detail":"{read_only:false; response_revision:1275; number_of_response:1; }","duration":"103.926538ms","start":"2025-10-19T12:10:03.555822Z","end":"2025-10-19T12:10:03.659748Z","steps":["trace[750000786] 'process raft request'  (duration: 103.560915ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:10:27.633650Z","caller":"traceutil/trace.go:172","msg":"trace[1253399024] linearizableReadLoop","detail":"{readStateIndex:1455; appliedIndex:1455; }","duration":"177.883689ms","start":"2025-10-19T12:10:27.455749Z","end":"2025-10-19T12:10:27.633633Z","steps":["trace[1253399024] 'read index received'  (duration: 177.876026ms)","trace[1253399024] 'applied index is now lower than readState.Index'  (duration: 6.539µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:10:27.633764Z","caller":"traceutil/trace.go:172","msg":"trace[1369922164] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1399; }","duration":"264.66434ms","start":"2025-10-19T12:10:27.369090Z","end":"2025-10-19T12:10:27.633755Z","steps":["trace[1369922164] 'process raft request'  (duration: 264.562671ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:10:27.633853Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.086999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2025-10-19T12:10:27.633876Z","caller":"traceutil/trace.go:172","msg":"trace[1275611392] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1399; }","duration":"178.125063ms","start":"2025-10-19T12:10:27.455745Z","end":"2025-10-19T12:10:27.633871Z","steps":["trace[1275611392] 'agreement among raft nodes before linearized reading'  (duration: 178.014715ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:10:27.634006Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.501807ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:10:27.634020Z","caller":"traceutil/trace.go:172","msg":"trace[1502512854] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1399; }","duration":"154.518314ms","start":"2025-10-19T12:10:27.479497Z","end":"2025-10-19T12:10:27.634015Z","steps":["trace[1502512854] 'agreement among raft nodes before linearized reading'  (duration: 154.493234ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:10:44.747844Z","caller":"traceutil/trace.go:172","msg":"trace[454764584] transaction","detail":"{read_only:false; response_revision:1570; number_of_response:1; }","duration":"216.203891ms","start":"2025-10-19T12:10:44.531628Z","end":"2025-10-19T12:10:44.747832Z","steps":["trace[454764584] 'process raft request'  (duration: 216.119305ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:11:16.017207Z","caller":"traceutil/trace.go:172","msg":"trace[772071571] transaction","detail":"{read_only:false; response_revision:1753; number_of_response:1; }","duration":"200.990108ms","start":"2025-10-19T12:11:15.816204Z","end":"2025-10-19T12:11:16.017194Z","steps":["trace[772071571] 'process raft request'  (duration: 200.886226ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:12:56 up 6 min,  0 users,  load average: 0.35, 0.96, 0.55
	Linux addons-360741 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b3f488179223646627422ee33ace507f203c22c7f8944bc6de4424c23d825c86] <==
	E1019 12:08:23.568286       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.69.214:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.69.214:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.69.214:443: connect: connection refused" logger="UnhandledError"
	E1019 12:08:23.570371       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.69.214:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.69.214:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.69.214:443: connect: connection refused" logger="UnhandledError"
	E1019 12:08:23.581644       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.69.214:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.69.214:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.69.214:443: connect: connection refused" logger="UnhandledError"
	I1019 12:08:23.710274       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 12:10:12.282389       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:57326: use of closed network connection
	E1019 12:10:12.475734       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:57350: use of closed network connection
	I1019 12:10:21.603847       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.219.91"}
	I1019 12:10:28.295966       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1019 12:10:28.490581       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.219.161"}
	I1019 12:10:53.128228       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1019 12:11:23.960257       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 12:11:23.960433       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 12:11:23.987873       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 12:11:23.987934       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 12:11:24.014690       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 12:11:24.014781       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 12:11:24.019080       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 12:11:24.019127       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 12:11:24.060069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 12:11:24.060222       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 12:11:24.588002       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W1019 12:11:25.016400       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1019 12:11:25.060216       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1019 12:11:25.181433       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1019 12:12:55.247845       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.237.104"}
	
	
	==> kube-controller-manager [9a02a7dd53a9d28b5036d3a0ce0bf08b08b42411227153f45b117ce5f0b108ac] <==
	E1019 12:11:29.382002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:11:33.466361       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:11:33.467416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:11:33.999983       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:11:34.000907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:11:35.729352       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:11:35.730342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:11:41.394191       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:11:41.395282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:11:46.075086       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:11:46.076188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:11:48.049713       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:11:48.050607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:12:06.801553       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:12:06.802495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:12:09.051506       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:12:09.052555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:12:10.542047       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:12:10.543441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:12:45.926532       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:12:45.927673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:12:53.824124       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:12:53.825069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 12:12:54.350906       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 12:12:54.351958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [54f372b40f3f35f3172909781f4872553a7e8003cefdf4a4b1f058bdc61fa287] <==
	I1019 12:07:26.541249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:07:26.641640       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:07:26.641665       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.35"]
	E1019 12:07:26.641719       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:07:26.816150       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1019 12:07:26.816278       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1019 12:07:26.816408       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:07:26.872923       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:07:26.873840       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:07:26.873852       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:07:26.881546       1 config.go:200] "Starting service config controller"
	I1019 12:07:26.881563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:07:26.881605       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:07:26.881612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:07:26.881626       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:07:26.881630       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:07:26.882365       1 config.go:309] "Starting node config controller"
	I1019 12:07:26.882371       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:07:26.882376       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:07:26.982712       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:07:26.982737       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:07:26.982802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ac924b0bb50e4c1d55c4d0f397b684da4e49873e58dbc3dca84377151708c8b1] <==
	E1019 12:07:17.374769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:07:17.374810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:07:17.374860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:07:17.374890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:07:17.378818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:07:17.378914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:07:17.378963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:07:17.379048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:07:17.379122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:07:17.379160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:07:17.379192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:07:17.379229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:07:17.379259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:07:18.192517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:07:18.196374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:07:18.242575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:07:18.362277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:07:18.392496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:07:18.441593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:07:18.477053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:07:18.550010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:07:18.563108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:07:18.566514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:07:18.567854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1019 12:07:18.959526       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:11:27 addons-360741 kubelet[1497]: I1019 12:11:27.984730    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:11:27 addons-360741 kubelet[1497]: I1019 12:11:27.992186    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5937c4-17bd-4d2b-a415-6932ce2ed159" path="/var/lib/kubelet/pods/2c5937c4-17bd-4d2b-a415-6932ce2ed159/volumes"
	Oct 19 12:11:27 addons-360741 kubelet[1497]: I1019 12:11:27.992587    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04738a0-6b16-44cb-8a92-a3006a219e20" path="/var/lib/kubelet/pods/a04738a0-6b16-44cb-8a92-a3006a219e20/volumes"
	Oct 19 12:11:27 addons-360741 kubelet[1497]: I1019 12:11:27.992881    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a901ea-2007-41b2-b481-98f6070ab3df" path="/var/lib/kubelet/pods/e0a901ea-2007-41b2-b481-98f6070ab3df/volumes"
	Oct 19 12:11:30 addons-360741 kubelet[1497]: E1019 12:11:30.174070    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875890173635999  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:11:30 addons-360741 kubelet[1497]: E1019 12:11:30.174091    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875890173635999  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:11:36 addons-360741 kubelet[1497]: I1019 12:11:36.985062    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-s28qs" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:11:40 addons-360741 kubelet[1497]: E1019 12:11:40.176958    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875900176401196  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:11:40 addons-360741 kubelet[1497]: E1019 12:11:40.176979    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875900176401196  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:11:50 addons-360741 kubelet[1497]: E1019 12:11:50.180028    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875910179206922  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:11:50 addons-360741 kubelet[1497]: E1019 12:11:50.180052    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875910179206922  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:00 addons-360741 kubelet[1497]: E1019 12:12:00.183932    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875920183425395  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:00 addons-360741 kubelet[1497]: E1019 12:12:00.183985    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875920183425395  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:10 addons-360741 kubelet[1497]: E1019 12:12:10.190076    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875930189268331  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:10 addons-360741 kubelet[1497]: E1019 12:12:10.190431    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875930189268331  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:20 addons-360741 kubelet[1497]: E1019 12:12:20.192889    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875940192389775  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:20 addons-360741 kubelet[1497]: E1019 12:12:20.193261    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875940192389775  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:30 addons-360741 kubelet[1497]: E1019 12:12:30.195915    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875950195101045  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:30 addons-360741 kubelet[1497]: E1019 12:12:30.195978    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875950195101045  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:40 addons-360741 kubelet[1497]: E1019 12:12:40.198043    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875960197547172  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:40 addons-360741 kubelet[1497]: E1019 12:12:40.198067    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875960197547172  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:45 addons-360741 kubelet[1497]: I1019 12:12:45.984971    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:12:50 addons-360741 kubelet[1497]: E1019 12:12:50.200415    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760875970199993055  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:50 addons-360741 kubelet[1497]: E1019 12:12:50.200437    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760875970199993055  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 12:12:55 addons-360741 kubelet[1497]: I1019 12:12:55.294307    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qhj\" (UniqueName: \"kubernetes.io/projected/720a351e-c3f9-4f30-9efc-b1eeaf8eca71-kube-api-access-46qhj\") pod \"hello-world-app-5d498dc89-4p68l\" (UID: \"720a351e-c3f9-4f30-9efc-b1eeaf8eca71\") " pod="default/hello-world-app-5d498dc89-4p68l"
	
	
	==> storage-provisioner [064c0829cbb92da55d0aa323e00c4564a33e0d5525f8bf7c1efbabb82b0c045f] <==
	W1019 12:12:32.583018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:34.586779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:34.593261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:36.597140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:36.601490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:38.604964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:38.610981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:40.614853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:40.619569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:42.622710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:42.627623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:44.630438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:44.634851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:46.638407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:46.645526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:48.648893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:48.655695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:50.659607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:50.667055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:52.671123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:52.676171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:54.679587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:54.684936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:56.689578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:12:56.694900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-360741 -n addons-360741
helpers_test.go:269: (dbg) Run:  kubectl --context addons-360741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-4p68l ingress-nginx-admission-create-9gm2b ingress-nginx-admission-patch-qcphd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-360741 describe pod hello-world-app-5d498dc89-4p68l ingress-nginx-admission-create-9gm2b ingress-nginx-admission-patch-qcphd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-360741 describe pod hello-world-app-5d498dc89-4p68l ingress-nginx-admission-create-9gm2b ingress-nginx-admission-patch-qcphd: exit status 1 (68.699068ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-4p68l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-360741/192.168.39.35
	Start Time:       Sun, 19 Oct 2025 12:12:55 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46qhj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-46qhj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-4p68l to addons-360741
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9gm2b" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qcphd" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-360741 describe pod hello-world-app-5d498dc89-4p68l ingress-nginx-admission-create-9gm2b ingress-nginx-admission-patch-qcphd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-360741 addons disable ingress-dns --alsologtostderr -v=1: (1.182385722s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-360741 addons disable ingress --alsologtostderr -v=1: (7.690910274s)
--- FAIL: TestAddons/parallel/Ingress (158.48s)
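For local follow-up, the post-mortem above only shows the ingress-nginx admission pods already deleted and the hello-world-app pod still pulling its image when the logs were captured. A minimal, hypothetical diagnostic sketch (profile, namespace, and service names are taken from the log output above; the kubectl/minikube invocations are standard CLI usage, not commands run by the test):

	# state of the ingress-nginx controller pod
	kubectl --context addons-360741 -n ingress-nginx get pods -o wide
	# Ingress objects and the nginx Service created during the test
	kubectl --context addons-360741 get ingress -A
	kubectl --context addons-360741 get svc nginx
	# services as exposed through the minikube node
	out/minikube-linux-amd64 -p addons-360741 service list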

x
+
TestPreload (166.68s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-621731 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1019 12:56:12.670616  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-621731 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m36.749296979s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-621731 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-621731 image pull gcr.io/k8s-minikube/busybox: (3.408459861s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-621731
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-621731: (6.814394128s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-621731 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 12:58:09.603614  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-621731 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.727976116s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-621731 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-19 12:58:36.355830321 +0000 UTC m=+3151.174908375
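The flow logged above (start with --preload=false, pull gcr.io/k8s-minikube/busybox, stop, restart, list images) can be replayed by hand to check whether the pulled image survives the restart; in this run it did not. A minimal sketch, reusing the profile name and flags shown in the log (trimmed of --alsologtostderr; nothing here is new test logic):

	# start the cluster without a preloaded tarball, then pull an extra image
	out/minikube-linux-amd64 start -p test-preload-621731 --memory=3072 --wait=true --preload=false --driver=kvm2 --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-621731 image pull gcr.io/k8s-minikube/busybox
	# stop and start again; the pulled image should still be in the crio image store
	out/minikube-linux-amd64 stop -p test-preload-621731
	out/minikube-linux-amd64 start -p test-preload-621731 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
	# the expectation that failed above
	out/minikube-linux-amd64 -p test-preload-621731 image list | grep -q gcr.io/k8s-minikube/busybox || echo "busybox missing after restart"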
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-621731 -n test-preload-621731
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-621731 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-621731 logs -n 25: (1.072182874s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-875731 ssh -n multinode-875731-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:45 UTC │
	│ ssh     │ multinode-875731 ssh -n multinode-875731 sudo cat /home/docker/cp-test_multinode-875731-m03_multinode-875731.txt                                                                    │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:45 UTC │
	│ cp      │ multinode-875731 cp multinode-875731-m03:/home/docker/cp-test.txt multinode-875731-m02:/home/docker/cp-test_multinode-875731-m03_multinode-875731-m02.txt                           │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:45 UTC │
	│ ssh     │ multinode-875731 ssh -n multinode-875731-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:45 UTC │
	│ ssh     │ multinode-875731 ssh -n multinode-875731-m02 sudo cat /home/docker/cp-test_multinode-875731-m03_multinode-875731-m02.txt                                                            │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:45 UTC │
	│ node    │ multinode-875731 node stop m03                                                                                                                                                      │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:45 UTC │
	│ node    │ multinode-875731 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:45 UTC │
	│ node    │ list -p multinode-875731                                                                                                                                                            │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │                     │
	│ stop    │ -p multinode-875731                                                                                                                                                                 │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:45 UTC │ 19 Oct 25 12:48 UTC │
	│ start   │ -p multinode-875731 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:48 UTC │ 19 Oct 25 12:50 UTC │
	│ node    │ list -p multinode-875731                                                                                                                                                            │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:50 UTC │                     │
	│ node    │ multinode-875731 node delete m03                                                                                                                                                    │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:50 UTC │ 19 Oct 25 12:50 UTC │
	│ stop    │ multinode-875731 stop                                                                                                                                                               │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:50 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p multinode-875731 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:55 UTC │
	│ node    │ list -p multinode-875731                                                                                                                                                            │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:55 UTC │                     │
	│ start   │ -p multinode-875731-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-875731-m02 │ jenkins │ v1.37.0 │ 19 Oct 25 12:55 UTC │                     │
	│ start   │ -p multinode-875731-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-875731-m03 │ jenkins │ v1.37.0 │ 19 Oct 25 12:55 UTC │ 19 Oct 25 12:55 UTC │
	│ node    │ add -p multinode-875731                                                                                                                                                             │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:55 UTC │                     │
	│ delete  │ -p multinode-875731-m03                                                                                                                                                             │ multinode-875731-m03 │ jenkins │ v1.37.0 │ 19 Oct 25 12:55 UTC │ 19 Oct 25 12:55 UTC │
	│ delete  │ -p multinode-875731                                                                                                                                                                 │ multinode-875731     │ jenkins │ v1.37.0 │ 19 Oct 25 12:55 UTC │ 19 Oct 25 12:55 UTC │
	│ start   │ -p test-preload-621731 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-621731  │ jenkins │ v1.37.0 │ 19 Oct 25 12:55 UTC │ 19 Oct 25 12:57 UTC │
	│ image   │ test-preload-621731 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-621731  │ jenkins │ v1.37.0 │ 19 Oct 25 12:57 UTC │ 19 Oct 25 12:57 UTC │
	│ stop    │ -p test-preload-621731                                                                                                                                                              │ test-preload-621731  │ jenkins │ v1.37.0 │ 19 Oct 25 12:57 UTC │ 19 Oct 25 12:57 UTC │
	│ start   │ -p test-preload-621731 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-621731  │ jenkins │ v1.37.0 │ 19 Oct 25 12:57 UTC │ 19 Oct 25 12:58 UTC │
	│ image   │ test-preload-621731 image list                                                                                                                                                      │ test-preload-621731  │ jenkins │ v1.37.0 │ 19 Oct 25 12:58 UTC │ 19 Oct 25 12:58 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:57:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:57:39.459892  179153 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:57:39.460167  179153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:57:39.460177  179153 out.go:374] Setting ErrFile to fd 2...
	I1019 12:57:39.460192  179153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:57:39.460436  179153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:57:39.460910  179153 out.go:368] Setting JSON to false
	I1019 12:57:39.461834  179153 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5993,"bootTime":1760872666,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:57:39.461938  179153 start.go:141] virtualization: kvm guest
	I1019 12:57:39.463727  179153 out.go:179] * [test-preload-621731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:57:39.464843  179153 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:57:39.464849  179153 notify.go:220] Checking for updates...
	I1019 12:57:39.465893  179153 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:57:39.467291  179153 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:57:39.468413  179153 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:57:39.469828  179153 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:57:39.471066  179153 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:57:39.472508  179153 config.go:182] Loaded profile config "test-preload-621731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 12:57:39.473004  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:57:39.473078  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:57:39.487527  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I1019 12:57:39.488155  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:57:39.488722  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:57:39.488747  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:57:39.489161  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:57:39.489367  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:57:39.491145  179153 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 12:57:39.492319  179153 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:57:39.492673  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:57:39.492731  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:57:39.507033  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43621
	I1019 12:57:39.507628  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:57:39.508210  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:57:39.508248  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:57:39.508606  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:57:39.508783  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:57:39.545357  179153 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 12:57:39.546306  179153 start.go:305] selected driver: kvm2
	I1019 12:57:39.546324  179153 start.go:925] validating driver "kvm2" against &{Name:test-preload-621731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-621731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:57:39.546452  179153 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:57:39.547255  179153 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:57:39.547362  179153 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:57:39.563080  179153 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:57:39.563113  179153 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:57:39.579325  179153 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:57:39.579699  179153 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:57:39.579727  179153 cni.go:84] Creating CNI manager for ""
	I1019 12:57:39.579775  179153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:57:39.579825  179153 start.go:349] cluster config:
	{Name:test-preload-621731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-621731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:57:39.579948  179153 iso.go:125] acquiring lock: {Name:mk95990edcd162f08eff1d65580753d7d9806693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:57:39.582155  179153 out.go:179] * Starting "test-preload-621731" primary control-plane node in "test-preload-621731" cluster
	I1019 12:57:39.583209  179153 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 12:57:39.694316  179153 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 12:57:39.694347  179153 cache.go:58] Caching tarball of preloaded images
	I1019 12:57:39.694495  179153 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 12:57:39.696002  179153 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1019 12:57:39.697225  179153 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 12:57:39.811207  179153 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1019 12:57:39.811278  179153 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 12:57:51.217494  179153 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1019 12:57:51.217653  179153 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/config.json ...
	I1019 12:57:51.217902  179153 start.go:360] acquireMachinesLock for test-preload-621731: {Name:mk205e9aa7c82fb04c974fad7345827c2806baf1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 12:57:51.217982  179153 start.go:364] duration metric: took 52.719µs to acquireMachinesLock for "test-preload-621731"
	I1019 12:57:51.218005  179153 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:57:51.218016  179153 fix.go:54] fixHost starting: 
	I1019 12:57:51.218360  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:57:51.218414  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:57:51.232987  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I1019 12:57:51.233560  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:57:51.234127  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:57:51.234152  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:57:51.234527  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:57:51.234723  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:57:51.234857  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetState
	I1019 12:57:51.236869  179153 fix.go:112] recreateIfNeeded on test-preload-621731: state=Stopped err=<nil>
	I1019 12:57:51.236905  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	W1019 12:57:51.237057  179153 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:57:51.239562  179153 out.go:252] * Restarting existing kvm2 VM for "test-preload-621731" ...
	I1019 12:57:51.239592  179153 main.go:141] libmachine: (test-preload-621731) Calling .Start
	I1019 12:57:51.239767  179153 main.go:141] libmachine: (test-preload-621731) starting domain...
	I1019 12:57:51.239789  179153 main.go:141] libmachine: (test-preload-621731) ensuring networks are active...
	I1019 12:57:51.240592  179153 main.go:141] libmachine: (test-preload-621731) Ensuring network default is active
	I1019 12:57:51.240996  179153 main.go:141] libmachine: (test-preload-621731) Ensuring network mk-test-preload-621731 is active
	I1019 12:57:51.241477  179153 main.go:141] libmachine: (test-preload-621731) getting domain XML...
	I1019 12:57:51.242644  179153 main.go:141] libmachine: (test-preload-621731) DBG | starting domain XML:
	I1019 12:57:51.242670  179153 main.go:141] libmachine: (test-preload-621731) DBG | <domain type='kvm'>
	I1019 12:57:51.242681  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <name>test-preload-621731</name>
	I1019 12:57:51.242699  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <uuid>7fc7def6-150a-4923-854f-681971690a0b</uuid>
	I1019 12:57:51.242715  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <memory unit='KiB'>3145728</memory>
	I1019 12:57:51.242724  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1019 12:57:51.242733  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <vcpu placement='static'>2</vcpu>
	I1019 12:57:51.242739  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <os>
	I1019 12:57:51.242748  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1019 12:57:51.242757  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <boot dev='cdrom'/>
	I1019 12:57:51.242767  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <boot dev='hd'/>
	I1019 12:57:51.242778  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <bootmenu enable='no'/>
	I1019 12:57:51.242800  179153 main.go:141] libmachine: (test-preload-621731) DBG |   </os>
	I1019 12:57:51.242822  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <features>
	I1019 12:57:51.242866  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <acpi/>
	I1019 12:57:51.242891  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <apic/>
	I1019 12:57:51.242898  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <pae/>
	I1019 12:57:51.242902  179153 main.go:141] libmachine: (test-preload-621731) DBG |   </features>
	I1019 12:57:51.242911  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1019 12:57:51.242917  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <clock offset='utc'/>
	I1019 12:57:51.242924  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <on_poweroff>destroy</on_poweroff>
	I1019 12:57:51.242928  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <on_reboot>restart</on_reboot>
	I1019 12:57:51.242934  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <on_crash>destroy</on_crash>
	I1019 12:57:51.242941  179153 main.go:141] libmachine: (test-preload-621731) DBG |   <devices>
	I1019 12:57:51.242948  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1019 12:57:51.242955  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <disk type='file' device='cdrom'>
	I1019 12:57:51.242961  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <driver name='qemu' type='raw'/>
	I1019 12:57:51.242976  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/boot2docker.iso'/>
	I1019 12:57:51.242984  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <target dev='hdc' bus='scsi'/>
	I1019 12:57:51.242988  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <readonly/>
	I1019 12:57:51.242995  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1019 12:57:51.243002  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </disk>
	I1019 12:57:51.243008  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <disk type='file' device='disk'>
	I1019 12:57:51.243015  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1019 12:57:51.243025  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/test-preload-621731.rawdisk'/>
	I1019 12:57:51.243032  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <target dev='hda' bus='virtio'/>
	I1019 12:57:51.243039  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1019 12:57:51.243048  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </disk>
	I1019 12:57:51.243055  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1019 12:57:51.243063  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1019 12:57:51.243069  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </controller>
	I1019 12:57:51.243074  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1019 12:57:51.243082  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1019 12:57:51.243088  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1019 12:57:51.243093  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </controller>
	I1019 12:57:51.243101  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <interface type='network'>
	I1019 12:57:51.243106  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <mac address='52:54:00:f5:7f:19'/>
	I1019 12:57:51.243113  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <source network='mk-test-preload-621731'/>
	I1019 12:57:51.243129  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <model type='virtio'/>
	I1019 12:57:51.243145  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1019 12:57:51.243155  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </interface>
	I1019 12:57:51.243170  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <interface type='network'>
	I1019 12:57:51.243183  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <mac address='52:54:00:28:9e:77'/>
	I1019 12:57:51.243192  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <source network='default'/>
	I1019 12:57:51.243204  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <model type='virtio'/>
	I1019 12:57:51.243221  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1019 12:57:51.243232  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </interface>
	I1019 12:57:51.243242  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <serial type='pty'>
	I1019 12:57:51.243251  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <target type='isa-serial' port='0'>
	I1019 12:57:51.243259  179153 main.go:141] libmachine: (test-preload-621731) DBG |         <model name='isa-serial'/>
	I1019 12:57:51.243266  179153 main.go:141] libmachine: (test-preload-621731) DBG |       </target>
	I1019 12:57:51.243276  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </serial>
	I1019 12:57:51.243306  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <console type='pty'>
	I1019 12:57:51.243331  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <target type='serial' port='0'/>
	I1019 12:57:51.243348  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </console>
	I1019 12:57:51.243361  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <input type='mouse' bus='ps2'/>
	I1019 12:57:51.243372  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <input type='keyboard' bus='ps2'/>
	I1019 12:57:51.243382  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <audio id='1' type='none'/>
	I1019 12:57:51.243388  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <memballoon model='virtio'>
	I1019 12:57:51.243396  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1019 12:57:51.243404  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </memballoon>
	I1019 12:57:51.243413  179153 main.go:141] libmachine: (test-preload-621731) DBG |     <rng model='virtio'>
	I1019 12:57:51.243439  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <backend model='random'>/dev/random</backend>
	I1019 12:57:51.243454  179153 main.go:141] libmachine: (test-preload-621731) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1019 12:57:51.243464  179153 main.go:141] libmachine: (test-preload-621731) DBG |     </rng>
	I1019 12:57:51.243473  179153 main.go:141] libmachine: (test-preload-621731) DBG |   </devices>
	I1019 12:57:51.243483  179153 main.go:141] libmachine: (test-preload-621731) DBG | </domain>
	I1019 12:57:51.243493  179153 main.go:141] libmachine: (test-preload-621731) DBG | 
	I1019 12:57:52.503265  179153 main.go:141] libmachine: (test-preload-621731) waiting for domain to start...
	I1019 12:57:52.504601  179153 main.go:141] libmachine: (test-preload-621731) domain is now running
	I1019 12:57:52.504622  179153 main.go:141] libmachine: (test-preload-621731) waiting for IP...
	I1019 12:57:52.505445  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:57:52.505990  179153 main.go:141] libmachine: (test-preload-621731) found domain IP: 192.168.39.51
	I1019 12:57:52.506029  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has current primary IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:57:52.506040  179153 main.go:141] libmachine: (test-preload-621731) reserving static IP address...
	I1019 12:57:52.506522  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "test-preload-621731", mac: "52:54:00:f5:7f:19", ip: "192.168.39.51"} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:56:08 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:57:52.506560  179153 main.go:141] libmachine: (test-preload-621731) reserved static IP address 192.168.39.51 for domain test-preload-621731
	I1019 12:57:52.506578  179153 main.go:141] libmachine: (test-preload-621731) DBG | skip adding static IP to network mk-test-preload-621731 - found existing host DHCP lease matching {name: "test-preload-621731", mac: "52:54:00:f5:7f:19", ip: "192.168.39.51"}
	I1019 12:57:52.506589  179153 main.go:141] libmachine: (test-preload-621731) waiting for SSH...
	I1019 12:57:52.506610  179153 main.go:141] libmachine: (test-preload-621731) DBG | Getting to WaitForSSH function...
	I1019 12:57:52.508695  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:57:52.509072  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:56:08 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:57:52.509109  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:57:52.509235  179153 main.go:141] libmachine: (test-preload-621731) DBG | Using SSH client type: external
	I1019 12:57:52.509304  179153 main.go:141] libmachine: (test-preload-621731) DBG | Using SSH private key: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa (-rw-------)
	I1019 12:57:52.509347  179153 main.go:141] libmachine: (test-preload-621731) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 12:57:52.509366  179153 main.go:141] libmachine: (test-preload-621731) DBG | About to run SSH command:
	I1019 12:57:52.509387  179153 main.go:141] libmachine: (test-preload-621731) DBG | exit 0
	I1019 12:58:02.739127  179153 main.go:141] libmachine: (test-preload-621731) DBG | SSH cmd err, output: exit status 255: 
	I1019 12:58:02.739175  179153 main.go:141] libmachine: (test-preload-621731) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1019 12:58:02.739193  179153 main.go:141] libmachine: (test-preload-621731) DBG | command : exit 0
	I1019 12:58:02.739203  179153 main.go:141] libmachine: (test-preload-621731) DBG | err     : exit status 255
	I1019 12:58:02.739217  179153 main.go:141] libmachine: (test-preload-621731) DBG | output  : 
	I1019 12:58:05.741251  179153 main.go:141] libmachine: (test-preload-621731) DBG | Getting to WaitForSSH function...
	I1019 12:58:05.744559  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.745040  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:05.745078  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.745260  179153 main.go:141] libmachine: (test-preload-621731) DBG | Using SSH client type: external
	I1019 12:58:05.745306  179153 main.go:141] libmachine: (test-preload-621731) DBG | Using SSH private key: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa (-rw-------)
	I1019 12:58:05.745329  179153 main.go:141] libmachine: (test-preload-621731) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 12:58:05.745339  179153 main.go:141] libmachine: (test-preload-621731) DBG | About to run SSH command:
	I1019 12:58:05.745351  179153 main.go:141] libmachine: (test-preload-621731) DBG | exit 0
	I1019 12:58:05.870953  179153 main.go:141] libmachine: (test-preload-621731) DBG | SSH cmd err, output: <nil>: 
	I1019 12:58:05.871414  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetConfigRaw
	I1019 12:58:05.872094  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetIP
	I1019 12:58:05.874970  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.875378  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:05.875411  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.875649  179153 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/config.json ...
	I1019 12:58:05.875850  179153 machine.go:93] provisionDockerMachine start ...
	I1019 12:58:05.875871  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:05.876118  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:05.878910  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.879313  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:05.879372  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.879497  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:05.879674  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:05.879828  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:05.879970  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:05.880145  179153 main.go:141] libmachine: Using SSH client type: native
	I1019 12:58:05.880511  179153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1019 12:58:05.880525  179153 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:58:05.981994  179153 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1019 12:58:05.982037  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetMachineName
	I1019 12:58:05.982366  179153 buildroot.go:166] provisioning hostname "test-preload-621731"
	I1019 12:58:05.982393  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetMachineName
	I1019 12:58:05.982640  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:05.985563  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.985893  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:05.985933  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:05.986095  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:05.986309  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:05.986477  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:05.986622  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:05.986785  179153 main.go:141] libmachine: Using SSH client type: native
	I1019 12:58:05.987041  179153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1019 12:58:05.987059  179153 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-621731 && echo "test-preload-621731" | sudo tee /etc/hostname
	I1019 12:58:06.103653  179153 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-621731
	
	I1019 12:58:06.103695  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:06.106855  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.107254  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:06.107302  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.107504  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:06.107722  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:06.107874  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:06.107975  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:06.108122  179153 main.go:141] libmachine: Using SSH client type: native
	I1019 12:58:06.108383  179153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1019 12:58:06.108401  179153 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-621731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-621731/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-621731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:58:06.217962  179153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:58:06.218000  179153 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21772-144655/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-144655/.minikube}
	I1019 12:58:06.218055  179153 buildroot.go:174] setting up certificates
	I1019 12:58:06.218072  179153 provision.go:84] configureAuth start
	I1019 12:58:06.218091  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetMachineName
	I1019 12:58:06.218550  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetIP
	I1019 12:58:06.221682  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.222105  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:06.222137  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.222323  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:06.224960  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.225355  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:06.225387  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.225565  179153 provision.go:143] copyHostCerts
	I1019 12:58:06.225628  179153 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-144655/.minikube/ca.pem, removing ...
	I1019 12:58:06.225651  179153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-144655/.minikube/ca.pem
	I1019 12:58:06.225744  179153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/ca.pem (1078 bytes)
	I1019 12:58:06.225949  179153 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-144655/.minikube/cert.pem, removing ...
	I1019 12:58:06.225963  179153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-144655/.minikube/cert.pem
	I1019 12:58:06.225997  179153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/cert.pem (1123 bytes)
	I1019 12:58:06.226153  179153 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-144655/.minikube/key.pem, removing ...
	I1019 12:58:06.226162  179153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-144655/.minikube/key.pem
	I1019 12:58:06.226198  179153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/key.pem (1675 bytes)
	I1019 12:58:06.226275  179153 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem org=jenkins.test-preload-621731 san=[127.0.0.1 192.168.39.51 localhost minikube test-preload-621731]
	I1019 12:58:06.476768  179153 provision.go:177] copyRemoteCerts
	I1019 12:58:06.476840  179153 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:58:06.476870  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:06.480063  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.480486  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:06.480514  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.480720  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:06.480920  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:06.481058  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:06.481198  179153 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa Username:docker}
	I1019 12:58:06.561786  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 12:58:06.592661  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:58:06.623144  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1019 12:58:06.653535  179153 provision.go:87] duration metric: took 435.442069ms to configureAuth
	I1019 12:58:06.653576  179153 buildroot.go:189] setting minikube options for container-runtime
	I1019 12:58:06.653800  179153 config.go:182] Loaded profile config "test-preload-621731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 12:58:06.653885  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:06.657211  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.657685  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:06.657760  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.657872  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:06.658090  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:06.658254  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:06.658450  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:06.658649  179153 main.go:141] libmachine: Using SSH client type: native
	I1019 12:58:06.658859  179153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1019 12:58:06.658875  179153 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:58:06.896577  179153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:58:06.896630  179153 machine.go:96] duration metric: took 1.020765461s to provisionDockerMachine
	I1019 12:58:06.896648  179153 start.go:293] postStartSetup for "test-preload-621731" (driver="kvm2")
	I1019 12:58:06.896663  179153 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:58:06.896698  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:06.897068  179153 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:58:06.897113  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:06.900392  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.900778  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:06.900807  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:06.900986  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:06.901227  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:06.901392  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:06.901580  179153 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa Username:docker}
	I1019 12:58:06.983232  179153 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:58:06.988306  179153 info.go:137] Remote host: Buildroot 2025.02
	I1019 12:58:06.988330  179153 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-144655/.minikube/addons for local assets ...
	I1019 12:58:06.988414  179153 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-144655/.minikube/files for local assets ...
	I1019 12:58:06.988487  179153 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem -> 1487012.pem in /etc/ssl/certs
	I1019 12:58:06.988571  179153 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:58:07.000810  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem --> /etc/ssl/certs/1487012.pem (1708 bytes)
	I1019 12:58:07.029251  179153 start.go:296] duration metric: took 132.580949ms for postStartSetup
	I1019 12:58:07.029310  179153 fix.go:56] duration metric: took 15.811295582s for fixHost
	I1019 12:58:07.029334  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:07.032246  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.032634  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:07.032676  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.032809  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:07.033033  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:07.033200  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:07.033370  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:07.033554  179153 main.go:141] libmachine: Using SSH client type: native
	I1019 12:58:07.033762  179153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1019 12:58:07.033774  179153 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1019 12:58:07.133823  179153 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760878687.089138006
	
	I1019 12:58:07.133862  179153 fix.go:216] guest clock: 1760878687.089138006
	I1019 12:58:07.133873  179153 fix.go:229] Guest: 2025-10-19 12:58:07.089138006 +0000 UTC Remote: 2025-10-19 12:58:07.029315137 +0000 UTC m=+27.610153363 (delta=59.822869ms)
	I1019 12:58:07.133920  179153 fix.go:200] guest clock delta is within tolerance: 59.822869ms
	I1019 12:58:07.133927  179153 start.go:83] releasing machines lock for "test-preload-621731", held for 15.91593131s
	I1019 12:58:07.133961  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:07.134237  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetIP
	I1019 12:58:07.137243  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.137596  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:07.137625  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.137769  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:07.138378  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:07.138586  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:07.138728  179153 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:58:07.138792  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:07.138809  179153 ssh_runner.go:195] Run: cat /version.json
	I1019 12:58:07.138828  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:07.141950  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.141981  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.142432  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:07.142467  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:07.142499  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.142519  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:07.142688  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:07.142901  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:07.142909  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:07.143120  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:07.143130  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:07.143271  179153 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa Username:docker}
	I1019 12:58:07.143323  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:07.143471  179153 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa Username:docker}
	I1019 12:58:07.246361  179153 ssh_runner.go:195] Run: systemctl --version
	I1019 12:58:07.253133  179153 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:58:07.400784  179153 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:58:07.407454  179153 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:58:07.407547  179153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:58:07.427170  179153 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:58:07.427204  179153 start.go:495] detecting cgroup driver to use...
	I1019 12:58:07.427274  179153 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:58:07.446750  179153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:58:07.465359  179153 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:58:07.465453  179153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:58:07.482578  179153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:58:07.499279  179153 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:58:07.641824  179153 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:58:07.859410  179153 docker.go:234] disabling docker service ...
	I1019 12:58:07.859478  179153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:58:07.875869  179153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:58:07.890547  179153 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:58:08.043879  179153 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:58:08.190996  179153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:58:08.207066  179153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:58:08.229583  179153 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1019 12:58:08.229651  179153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:58:08.241937  179153 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 12:58:08.242034  179153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:58:08.254377  179153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:58:08.266744  179153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:58:08.278808  179153 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:58:08.291361  179153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:58:08.303374  179153 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:58:08.324985  179153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:58:08.337660  179153 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:58:08.348013  179153 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 12:58:08.348074  179153 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 12:58:08.367695  179153 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:58:08.379359  179153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:58:08.515051  179153 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:58:08.628004  179153 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:58:08.628082  179153 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:58:08.633847  179153 start.go:563] Will wait 60s for crictl version
	I1019 12:58:08.633908  179153 ssh_runner.go:195] Run: which crictl
	I1019 12:58:08.638258  179153 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1019 12:58:08.682899  179153 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1019 12:58:08.683005  179153 ssh_runner.go:195] Run: crio --version
	I1019 12:58:08.714577  179153 ssh_runner.go:195] Run: crio --version
	I1019 12:58:08.744943  179153 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1019 12:58:08.746025  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetIP
	I1019 12:58:08.749103  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:08.749581  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:08.749609  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:08.749853  179153 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1019 12:58:08.757142  179153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:58:08.772307  179153 kubeadm.go:883] updating cluster {Name:test-preload-621731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-621731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:58:08.772450  179153 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 12:58:08.772498  179153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:58:08.810415  179153 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1019 12:58:08.810490  179153 ssh_runner.go:195] Run: which lz4
	I1019 12:58:08.815974  179153 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1019 12:58:08.820503  179153 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1019 12:58:08.820536  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1019 12:58:10.282840  179153 crio.go:462] duration metric: took 1.466926758s to copy over tarball
	I1019 12:58:10.282952  179153 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1019 12:58:11.971817  179153 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.688824887s)
	I1019 12:58:11.971851  179153 crio.go:469] duration metric: took 1.688977185s to extract the tarball
	I1019 12:58:11.971859  179153 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1019 12:58:12.012877  179153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:58:12.059807  179153 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:58:12.059837  179153 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:58:12.059845  179153 kubeadm.go:934] updating node { 192.168.39.51 8443 v1.32.0 crio true true} ...
	I1019 12:58:12.059961  179153 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-621731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-621731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
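For readers unfamiliar with the doubled ExecStart in the kubelet fragment above: in a systemd drop-in, an empty ExecStart= first clears the command inherited from the base kubelet.service, and the following ExecStart= line then supplies the full kubelet command. Once such a drop-in is written, the usual sequence (which the log performs a few lines below) is:

  sudo systemctl daemon-reload     # pick up the new drop-in
  sudo systemctl start kubelet     # run kubelet with the overridden ExecStart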
	I1019 12:58:12.060040  179153 ssh_runner.go:195] Run: crio config
	I1019 12:58:12.109160  179153 cni.go:84] Creating CNI manager for ""
	I1019 12:58:12.109184  179153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:58:12.109205  179153 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:58:12.109242  179153 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-621731 NodeName:test-preload-621731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:58:12.109390  179153 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-621731"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:58:12.109455  179153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1019 12:58:12.122187  179153 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:58:12.122258  179153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:58:12.134695  179153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1019 12:58:12.158347  179153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:58:12.181913  179153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1019 12:58:12.206161  179153 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I1019 12:58:12.210597  179153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:58:12.226156  179153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:58:12.376959  179153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:58:12.418396  179153 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731 for IP: 192.168.39.51
	I1019 12:58:12.418427  179153 certs.go:195] generating shared ca certs ...
	I1019 12:58:12.418448  179153 certs.go:227] acquiring lock for ca certs: {Name:mk3746b9a64228b33b458f684a19c91de0767499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:58:12.418658  179153 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key
	I1019 12:58:12.418710  179153 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key
	I1019 12:58:12.418726  179153 certs.go:257] generating profile certs ...
	I1019 12:58:12.418851  179153 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/client.key
	I1019 12:58:12.418941  179153 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/apiserver.key.7de820e2
	I1019 12:58:12.418995  179153 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/proxy-client.key
	I1019 12:58:12.419142  179153 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/148701.pem (1338 bytes)
	W1019 12:58:12.419194  179153 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-144655/.minikube/certs/148701_empty.pem, impossibly tiny 0 bytes
	I1019 12:58:12.419207  179153 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:58:12.419239  179153 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem (1078 bytes)
	I1019 12:58:12.419268  179153 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:58:12.419315  179153 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem (1675 bytes)
	I1019 12:58:12.419369  179153 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem (1708 bytes)
	I1019 12:58:12.420073  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:58:12.456865  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:58:12.499321  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:58:12.529047  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:58:12.559451  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 12:58:12.589422  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 12:58:12.619045  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:58:12.649561  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:58:12.679498  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/certs/148701.pem --> /usr/share/ca-certificates/148701.pem (1338 bytes)
	I1019 12:58:12.708570  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem --> /usr/share/ca-certificates/1487012.pem (1708 bytes)
	I1019 12:58:12.737780  179153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:58:12.766570  179153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:58:12.787118  179153 ssh_runner.go:195] Run: openssl version
	I1019 12:58:12.793833  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1487012.pem && ln -fs /usr/share/ca-certificates/1487012.pem /etc/ssl/certs/1487012.pem"
	I1019 12:58:12.807523  179153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1487012.pem
	I1019 12:58:12.812826  179153 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:15 /usr/share/ca-certificates/1487012.pem
	I1019 12:58:12.812891  179153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1487012.pem
	I1019 12:58:12.820305  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1487012.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:58:12.833645  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:58:12.846928  179153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:58:12.852175  179153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:07 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:58:12.852255  179153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:58:12.859379  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:58:12.872862  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148701.pem && ln -fs /usr/share/ca-certificates/148701.pem /etc/ssl/certs/148701.pem"
	I1019 12:58:12.886274  179153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148701.pem
	I1019 12:58:12.892056  179153 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:15 /usr/share/ca-certificates/148701.pem
	I1019 12:58:12.892166  179153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148701.pem
	I1019 12:58:12.899921  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148701.pem /etc/ssl/certs/51391683.0"
	I1019 12:58:12.913686  179153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:58:12.919236  179153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:58:12.926980  179153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:58:12.934515  179153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:58:12.942207  179153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:58:12.949765  179153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:58:12.957259  179153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
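The six openssl runs above are expiry checks: with -checkend 86400 the command exits 0 when the certificate remains valid for at least another 86400 seconds (24 hours) and non-zero otherwise. Done by hand for two of the same files:

  # Exit status is what matters: 0 = still valid for a day, non-zero = expiring soon (or unreadable)
  for c in apiserver-kubelet-client front-proxy-client; do
    sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
      && echo "${c}: ok" || echo "${c}: expiring within 24h"
  done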
	I1019 12:58:12.964900  179153 kubeadm.go:400] StartCluster: {Name:test-preload-621731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-621731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:58:12.964989  179153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:58:12.965041  179153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:58:13.003367  179153 cri.go:89] found id: ""
	I1019 12:58:13.003453  179153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:58:13.015905  179153 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:58:13.015938  179153 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:58:13.015999  179153 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:58:13.028324  179153 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:58:13.028775  179153 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-621731" does not appear in /home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:58:13.028872  179153 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-144655/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-621731" cluster setting kubeconfig missing "test-preload-621731" context setting]
	I1019 12:58:13.029140  179153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/kubeconfig: {Name:mka451e8e94291f8682e25e26bb194afdfe90331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:58:13.029745  179153 kapi.go:59] client config for test-preload-621731: &rest.Config{Host:"https://192.168.39.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/client.key", CAFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 12:58:13.030205  179153 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 12:58:13.030226  179153 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 12:58:13.030231  179153 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 12:58:13.030236  179153 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 12:58:13.030240  179153 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 12:58:13.030642  179153 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:58:13.042277  179153 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.51
	I1019 12:58:13.042328  179153 kubeadm.go:1160] stopping kube-system containers ...
	I1019 12:58:13.042341  179153 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1019 12:58:13.042396  179153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:58:13.082044  179153 cri.go:89] found id: ""
	I1019 12:58:13.082133  179153 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1019 12:58:13.106165  179153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:58:13.118592  179153 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:58:13.118615  179153 kubeadm.go:157] found existing configuration files:
	
	I1019 12:58:13.118663  179153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:58:13.129520  179153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:58:13.129583  179153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:58:13.141391  179153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:58:13.152217  179153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:58:13.152293  179153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:58:13.164195  179153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:58:13.175024  179153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:58:13.175100  179153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:58:13.186851  179153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:58:13.197780  179153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:58:13.197850  179153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:58:13.209669  179153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:58:13.221295  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:58:13.272220  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:58:13.770200  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:58:14.036438  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:58:14.108841  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:58:14.198910  179153 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:58:14.198996  179153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:58:14.699129  179153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:58:15.199532  179153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:58:15.699438  179153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:58:16.199849  179153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:58:16.699850  179153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:58:16.727181  179153 api_server.go:72] duration metric: took 2.528284874s to wait for apiserver process to appear ...
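The half-second spacing of the pgrep runs above is a wait loop for the kube-apiserver process to appear; an equivalent loop by hand, reusing the same pattern as in the log, is roughly:

  # Poll until a kube-apiserver started for this minikube profile is running
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 0.5
  done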
	I1019 12:58:16.727216  179153 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:58:16.727242  179153 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1019 12:58:19.460824  179153 api_server.go:279] https://192.168.39.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 12:58:19.460862  179153 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 12:58:19.460890  179153 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1019 12:58:19.544201  179153 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:58:19.544242  179153 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:58:19.727664  179153 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1019 12:58:19.733195  179153 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:58:19.733228  179153 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:58:20.227951  179153 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1019 12:58:20.235745  179153 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:58:20.235776  179153 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:58:20.727378  179153 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1019 12:58:20.741329  179153 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:58:20.741362  179153 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:58:21.228071  179153 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1019 12:58:21.232618  179153 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I1019 12:58:21.239100  179153 api_server.go:141] control plane version: v1.32.0
	I1019 12:58:21.239131  179153 api_server.go:131] duration metric: took 4.511906778s to wait for apiserver health ...
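The probes above hit /healthz anonymously, which is why the first attempt returns 403 and the next ones 500 until the rbac/bootstrap-roles and related hooks finish; a rough hand-rolled equivalent of the same wait, using the cluster CA path shown in the client config earlier, is:

  # Poll the apiserver health endpoint until it reports ok
  until curl -s --cacert /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt \
          https://192.168.39.51:8443/healthz | grep -qx 'ok'; do
    sleep 0.5
  done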
	I1019 12:58:21.239141  179153 cni.go:84] Creating CNI manager for ""
	I1019 12:58:21.239148  179153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:58:21.241209  179153 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1019 12:58:21.242486  179153 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1019 12:58:21.262394  179153 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1019 12:58:21.296637  179153 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:58:21.304482  179153 system_pods.go:59] 7 kube-system pods found
	I1019 12:58:21.304554  179153 system_pods.go:61] "coredns-668d6bf9bc-qmdvd" [9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:58:21.304569  179153 system_pods.go:61] "etcd-test-preload-621731" [4af00e21-be45-4545-93fc-3e371a4fef6c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:58:21.304580  179153 system_pods.go:61] "kube-apiserver-test-preload-621731" [1d14d515-a8d4-4317-a57b-6a0a8ee26f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:58:21.304588  179153 system_pods.go:61] "kube-controller-manager-test-preload-621731" [a23e8e9c-ab19-476f-a76a-068bb35ffd19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:58:21.304595  179153 system_pods.go:61] "kube-proxy-w7c7c" [b1f72bb7-b544-400c-87e6-706a26b9dc92] Running
	I1019 12:58:21.304604  179153 system_pods.go:61] "kube-scheduler-test-preload-621731" [20c51897-26df-4755-8df2-8fecbe51205e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:58:21.304610  179153 system_pods.go:61] "storage-provisioner" [a8658129-3759-4db2-9c3c-eb1fcdf1cafa] Running
	I1019 12:58:21.304620  179153 system_pods.go:74] duration metric: took 7.948168ms to wait for pod list to return data ...
	I1019 12:58:21.304635  179153 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:58:21.307957  179153 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 12:58:21.307989  179153 node_conditions.go:123] node cpu capacity is 2
	I1019 12:58:21.308007  179153 node_conditions.go:105] duration metric: took 3.366347ms to run NodePressure ...
	I1019 12:58:21.308068  179153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:58:21.563985  179153 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1019 12:58:21.572600  179153 kubeadm.go:743] kubelet initialised
	I1019 12:58:21.572629  179153 kubeadm.go:744] duration metric: took 8.605012ms waiting for restarted kubelet to initialise ...
	I1019 12:58:21.572651  179153 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:58:21.595032  179153 ops.go:34] apiserver oom_adj: -16
	I1019 12:58:21.595065  179153 kubeadm.go:601] duration metric: took 8.579118579s to restartPrimaryControlPlane
	I1019 12:58:21.595078  179153 kubeadm.go:402] duration metric: took 8.630190446s to StartCluster
	I1019 12:58:21.595104  179153 settings.go:142] acquiring lock: {Name:mke60a3280e21298abca03691052cdadefc62fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:58:21.595189  179153 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:58:21.596192  179153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/kubeconfig: {Name:mka451e8e94291f8682e25e26bb194afdfe90331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:58:21.596554  179153 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:58:21.596635  179153 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:58:21.596761  179153 addons.go:69] Setting storage-provisioner=true in profile "test-preload-621731"
	I1019 12:58:21.596783  179153 addons.go:238] Setting addon storage-provisioner=true in "test-preload-621731"
	W1019 12:58:21.596797  179153 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:58:21.596794  179153 addons.go:69] Setting default-storageclass=true in profile "test-preload-621731"
	I1019 12:58:21.596814  179153 config.go:182] Loaded profile config "test-preload-621731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 12:58:21.596829  179153 host.go:66] Checking if "test-preload-621731" exists ...
	I1019 12:58:21.596821  179153 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-621731"
	I1019 12:58:21.597231  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:58:21.597274  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:58:21.597318  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:58:21.597363  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:58:21.598979  179153 out.go:179] * Verifying Kubernetes components...
	I1019 12:58:21.600266  179153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:58:21.612373  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I1019 12:58:21.612382  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I1019 12:58:21.613032  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:58:21.613083  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:58:21.613720  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:58:21.613740  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:58:21.613925  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:58:21.613951  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:58:21.614185  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:58:21.614348  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:58:21.614537  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetState
	I1019 12:58:21.614834  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:58:21.614902  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:58:21.617417  179153 kapi.go:59] client config for test-preload-621731: &rest.Config{Host:"https://192.168.39.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/client.key", CAFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 12:58:21.617790  179153 addons.go:238] Setting addon default-storageclass=true in "test-preload-621731"
	W1019 12:58:21.617812  179153 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:58:21.617844  179153 host.go:66] Checking if "test-preload-621731" exists ...
	I1019 12:58:21.618239  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:58:21.618305  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:58:21.630944  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
	I1019 12:58:21.631565  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:58:21.631888  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I1019 12:58:21.632080  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:58:21.632106  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:58:21.632355  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:58:21.632554  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:58:21.632870  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:58:21.632893  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:58:21.632933  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetState
	I1019 12:58:21.633314  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:58:21.633929  179153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:58:21.633979  179153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:58:21.635200  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:21.637131  179153 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:58:21.638237  179153 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:58:21.638253  179153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:58:21.638294  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:21.641921  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:21.642515  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:21.642546  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:21.642768  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:21.642954  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:21.643105  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:21.643247  179153 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa Username:docker}
	I1019 12:58:21.650649  179153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37863
	I1019 12:58:21.651332  179153 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:58:21.651859  179153 main.go:141] libmachine: Using API Version  1
	I1019 12:58:21.651882  179153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:58:21.652315  179153 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:58:21.652542  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetState
	I1019 12:58:21.654678  179153 main.go:141] libmachine: (test-preload-621731) Calling .DriverName
	I1019 12:58:21.655004  179153 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:58:21.655024  179153 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:58:21.655043  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHHostname
	I1019 12:58:21.659088  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:21.659691  179153 main.go:141] libmachine: (test-preload-621731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:7f:19", ip: ""} in network mk-test-preload-621731: {Iface:virbr1 ExpiryTime:2025-10-19 13:58:02 +0000 UTC Type:0 Mac:52:54:00:f5:7f:19 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:test-preload-621731 Clientid:01:52:54:00:f5:7f:19}
	I1019 12:58:21.659721  179153 main.go:141] libmachine: (test-preload-621731) DBG | domain test-preload-621731 has defined IP address 192.168.39.51 and MAC address 52:54:00:f5:7f:19 in network mk-test-preload-621731
	I1019 12:58:21.659949  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHPort
	I1019 12:58:21.660184  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHKeyPath
	I1019 12:58:21.660405  179153 main.go:141] libmachine: (test-preload-621731) Calling .GetSSHUsername
	I1019 12:58:21.660612  179153 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/test-preload-621731/id_rsa Username:docker}
	I1019 12:58:21.865600  179153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:58:21.887754  179153 node_ready.go:35] waiting up to 6m0s for node "test-preload-621731" to be "Ready" ...
	I1019 12:58:21.892198  179153 node_ready.go:49] node "test-preload-621731" is "Ready"
	I1019 12:58:21.892239  179153 node_ready.go:38] duration metric: took 4.441914ms for node "test-preload-621731" to be "Ready" ...
	I1019 12:58:21.892258  179153 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:58:21.892350  179153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:58:21.918588  179153 api_server.go:72] duration metric: took 321.982192ms to wait for apiserver process to appear ...
	I1019 12:58:21.918624  179153 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:58:21.918654  179153 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1019 12:58:21.924517  179153 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I1019 12:58:21.925686  179153 api_server.go:141] control plane version: v1.32.0
	I1019 12:58:21.925717  179153 api_server.go:131] duration metric: took 7.084106ms to wait for apiserver health ...
	I1019 12:58:21.925729  179153 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:58:21.933077  179153 system_pods.go:59] 7 kube-system pods found
	I1019 12:58:21.933120  179153 system_pods.go:61] "coredns-668d6bf9bc-qmdvd" [9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:58:21.933127  179153 system_pods.go:61] "etcd-test-preload-621731" [4af00e21-be45-4545-93fc-3e371a4fef6c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:58:21.933137  179153 system_pods.go:61] "kube-apiserver-test-preload-621731" [1d14d515-a8d4-4317-a57b-6a0a8ee26f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:58:21.933142  179153 system_pods.go:61] "kube-controller-manager-test-preload-621731" [a23e8e9c-ab19-476f-a76a-068bb35ffd19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:58:21.933146  179153 system_pods.go:61] "kube-proxy-w7c7c" [b1f72bb7-b544-400c-87e6-706a26b9dc92] Running
	I1019 12:58:21.933159  179153 system_pods.go:61] "kube-scheduler-test-preload-621731" [20c51897-26df-4755-8df2-8fecbe51205e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:58:21.933163  179153 system_pods.go:61] "storage-provisioner" [a8658129-3759-4db2-9c3c-eb1fcdf1cafa] Running
	I1019 12:58:21.933170  179153 system_pods.go:74] duration metric: took 7.435739ms to wait for pod list to return data ...
	I1019 12:58:21.933180  179153 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:58:21.940633  179153 default_sa.go:45] found service account: "default"
	I1019 12:58:21.940662  179153 default_sa.go:55] duration metric: took 7.475719ms for default service account to be created ...
	I1019 12:58:21.940673  179153 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:58:21.950754  179153 system_pods.go:86] 7 kube-system pods found
	I1019 12:58:21.950804  179153 system_pods.go:89] "coredns-668d6bf9bc-qmdvd" [9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:58:21.950818  179153 system_pods.go:89] "etcd-test-preload-621731" [4af00e21-be45-4545-93fc-3e371a4fef6c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:58:21.950832  179153 system_pods.go:89] "kube-apiserver-test-preload-621731" [1d14d515-a8d4-4317-a57b-6a0a8ee26f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:58:21.950841  179153 system_pods.go:89] "kube-controller-manager-test-preload-621731" [a23e8e9c-ab19-476f-a76a-068bb35ffd19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:58:21.950848  179153 system_pods.go:89] "kube-proxy-w7c7c" [b1f72bb7-b544-400c-87e6-706a26b9dc92] Running
	I1019 12:58:21.950858  179153 system_pods.go:89] "kube-scheduler-test-preload-621731" [20c51897-26df-4755-8df2-8fecbe51205e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:58:21.950863  179153 system_pods.go:89] "storage-provisioner" [a8658129-3759-4db2-9c3c-eb1fcdf1cafa] Running
	I1019 12:58:21.950875  179153 system_pods.go:126] duration metric: took 10.194322ms to wait for k8s-apps to be running ...
	I1019 12:58:21.950887  179153 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:58:21.950948  179153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:58:21.971038  179153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:58:21.978831  179153 system_svc.go:56] duration metric: took 27.93386ms WaitForService to wait for kubelet
	I1019 12:58:21.978862  179153 kubeadm.go:586] duration metric: took 382.270551ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:58:21.978882  179153 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:58:21.982426  179153 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 12:58:21.982481  179153 node_conditions.go:123] node cpu capacity is 2
	I1019 12:58:21.982499  179153 node_conditions.go:105] duration metric: took 3.610747ms to run NodePressure ...
	I1019 12:58:21.982516  179153 start.go:241] waiting for startup goroutines ...
	I1019 12:58:22.090503  179153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:58:22.127565  179153 main.go:141] libmachine: Making call to close driver server
	I1019 12:58:22.127601  179153 main.go:141] libmachine: (test-preload-621731) Calling .Close
	I1019 12:58:22.127917  179153 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:58:22.127936  179153 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:58:22.127967  179153 main.go:141] libmachine: Making call to close driver server
	I1019 12:58:22.127979  179153 main.go:141] libmachine: (test-preload-621731) Calling .Close
	I1019 12:58:22.128262  179153 main.go:141] libmachine: (test-preload-621731) DBG | Closing plugin on server side
	I1019 12:58:22.128304  179153 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:58:22.128315  179153 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:58:22.142012  179153 main.go:141] libmachine: Making call to close driver server
	I1019 12:58:22.142039  179153 main.go:141] libmachine: (test-preload-621731) Calling .Close
	I1019 12:58:22.142392  179153 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:58:22.142415  179153 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:58:22.746000  179153 main.go:141] libmachine: Making call to close driver server
	I1019 12:58:22.746034  179153 main.go:141] libmachine: (test-preload-621731) Calling .Close
	I1019 12:58:22.746418  179153 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:58:22.746439  179153 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:58:22.746457  179153 main.go:141] libmachine: Making call to close driver server
	I1019 12:58:22.746465  179153 main.go:141] libmachine: (test-preload-621731) Calling .Close
	I1019 12:58:22.746745  179153 main.go:141] libmachine: Successfully made call to close driver server
	I1019 12:58:22.746764  179153 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 12:58:22.746774  179153 main.go:141] libmachine: (test-preload-621731) DBG | Closing plugin on server side
	I1019 12:58:22.749277  179153 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1019 12:58:22.750394  179153 addons.go:514] duration metric: took 1.153772675s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1019 12:58:22.750456  179153 start.go:246] waiting for cluster config update ...
	I1019 12:58:22.750468  179153 start.go:255] writing updated cluster config ...
	I1019 12:58:22.750698  179153 ssh_runner.go:195] Run: rm -f paused
	I1019 12:58:22.758135  179153 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:58:22.758904  179153 kapi.go:59] client config for test-preload-621731: &rest.Config{Host:"https://192.168.39.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/profiles/test-preload-621731/client.key", CAFile:"/home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 12:58:22.762878  179153 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-qmdvd" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:58:24.767842  179153 pod_ready.go:104] pod "coredns-668d6bf9bc-qmdvd" is not "Ready", error: <nil>
	W1019 12:58:26.769455  179153 pod_ready.go:104] pod "coredns-668d6bf9bc-qmdvd" is not "Ready", error: <nil>
	W1019 12:58:28.769680  179153 pod_ready.go:104] pod "coredns-668d6bf9bc-qmdvd" is not "Ready", error: <nil>
	W1019 12:58:31.268791  179153 pod_ready.go:104] pod "coredns-668d6bf9bc-qmdvd" is not "Ready", error: <nil>
	I1019 12:58:31.769164  179153 pod_ready.go:94] pod "coredns-668d6bf9bc-qmdvd" is "Ready"
	I1019 12:58:31.769192  179153 pod_ready.go:86] duration metric: took 9.006285502s for pod "coredns-668d6bf9bc-qmdvd" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:31.771871  179153 pod_ready.go:83] waiting for pod "etcd-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:58:33.777690  179153 pod_ready.go:104] pod "etcd-test-preload-621731" is not "Ready", error: <nil>
	I1019 12:58:35.277544  179153 pod_ready.go:94] pod "etcd-test-preload-621731" is "Ready"
	I1019 12:58:35.277590  179153 pod_ready.go:86] duration metric: took 3.505690667s for pod "etcd-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:35.279729  179153 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:35.284037  179153 pod_ready.go:94] pod "kube-apiserver-test-preload-621731" is "Ready"
	I1019 12:58:35.284073  179153 pod_ready.go:86] duration metric: took 4.308984ms for pod "kube-apiserver-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:35.286370  179153 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:35.291010  179153 pod_ready.go:94] pod "kube-controller-manager-test-preload-621731" is "Ready"
	I1019 12:58:35.291035  179153 pod_ready.go:86] duration metric: took 4.644319ms for pod "kube-controller-manager-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:35.294037  179153 pod_ready.go:83] waiting for pod "kube-proxy-w7c7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:35.476122  179153 pod_ready.go:94] pod "kube-proxy-w7c7c" is "Ready"
	I1019 12:58:35.476169  179153 pod_ready.go:86] duration metric: took 182.100134ms for pod "kube-proxy-w7c7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:35.675492  179153 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:36.076172  179153 pod_ready.go:94] pod "kube-scheduler-test-preload-621731" is "Ready"
	I1019 12:58:36.076211  179153 pod_ready.go:86] duration metric: took 400.679845ms for pod "kube-scheduler-test-preload-621731" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:58:36.076227  179153 pod_ready.go:40] duration metric: took 13.318040348s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:58:36.121382  179153 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1019 12:58:36.122920  179153 out.go:203] 
	W1019 12:58:36.124127  179153 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1019 12:58:36.125274  179153 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1019 12:58:36.126449  179153 out.go:179] * Done! kubectl is now configured to use "test-preload-621731" cluster and "default" namespace by default
	
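The api_server.go entries above show how the harness decides the control plane is back: it polls https://192.168.39.51:8443/healthz until the endpoint returns 200 "ok", then reads the reported control-plane version. The Go sketch below reproduces that style of poll outside the harness. It is a minimal illustration, not minikube's implementation: it skips TLS verification purely for brevity, whereas the kapi.go client config above shows the real client trusting the cluster CA at /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt.

// healthz_probe.go - minimal sketch of the /healthz poll seen in the log above.
// Not minikube's code; TLS verification is disabled here only to keep the
// example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; a real client should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, string(body))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log above; adjust for another cluster.
	if err := waitForHealthz("https://192.168.39.51:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}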
	
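The pod_ready.go entries then wait for every kube-system pod carrying one of the listed control-plane labels to report the Ready condition. Below is a simplified client-go sketch of that check for the k8s-app=kube-dns label used for CoreDNS; the kubeconfig path is an assumption (any kubeconfig whose current context points at test-preload-621731 will do), and unlike the harness it samples once rather than retrying until a deadline.

// podready_sketch.go - simplified stand-in for the pod_ready.go wait above:
// list kube-system pods by label and report whether each carries the Ready
// condition. Assumes a kubeconfig at ~/.kube/config for the test cluster.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label the harness uses for the CoreDNS wait (k8s-app=kube-dns).
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q Ready=%v\n", p.Name, ready)
	}
}
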
	==> CRI-O <==
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.007528992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878717007505405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a66480b-aa91-4695-abd5-591803fc129d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.008953909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a895f14-d73e-47e5-96e1-650680e2ceb4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.009059964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a895f14-d73e-47e5-96e1-650680e2ceb4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.009290613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36f0cacae7ae87695db5d1d792cd901f314400923e624016e2c4bc47b5fa3318,PodSandboxId:307c3c9367fb88d15f9dfca73b15a568e5a4dee240c58977cca0de8e13328547,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760878704154567375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qmdvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0336727e7557eeeb65730afde9a4be6745e4fe7b5fa1be17f8cc10355061c5a,PodSandboxId:ab14f5358d9fadb7e17be80246d1460c20c9a8544d2f2f010f22df8f5af25efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760878700676696542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a8658129-3759-4db2-9c3c-eb1fcdf1cafa,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225cab54071a98879b1b5a29f76b0da52c4519db28baa7ee671b89ff6ec4a274,PodSandboxId:cddf00054fd2598e2143674364a05a177a82fea3357bf18c004febcbe61754c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760878700571724863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7c7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1
f72bb7-b544-400c-87e6-706a26b9dc92,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f2c86f353c7e26c9ca09b04a2d2fdafffe2d2922ce8002806b13da74d02a16,PodSandboxId:0e219937ea70572ac280224736a6aec3ff932ddd1a9e075a3ca15574a408124d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760878696305070933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a419c8e2ca26fa1406d90b6d83c63e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb4b7fb3769e7387c5a7b2e983add5b8aa6e524f7f69a6cd1ec66814d6526a,PodSandboxId:cb5f80f73dd2f6669256c2ce2dff33a9187c574bfd58b38bdd305a50434071a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760878696279093049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd2614cfc413ccde02ebb7db28f5229f,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc5cce59f83900e954f210fdf35a4196e3f8d91ebb5b77cbc9bb0637603fed20,PodSandboxId:6040904203d06b61d314fcbd49d6dde0b2f05a57b32b68ac4e3dd098bc930388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760878696273167443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be9f2bee982d59f15552f025ffcf93c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690d5b9a695e0b769b491b63f295785597c7677ad8b361615e7e9a85217c451e,PodSandboxId:a3a0a3ee412b192b3258e4c942a586d8c80a33001a85612319211e2c89bfa883,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760878696263577047,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcce758218b478d1729e0b36d944004,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a895f14-d73e-47e5-96e1-650680e2ceb4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.047098432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0fd43914-7eae-47e3-a92e-124bbf9337c7 name=/runtime.v1.RuntimeService/Version
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.047169316Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0fd43914-7eae-47e3-a92e-124bbf9337c7 name=/runtime.v1.RuntimeService/Version
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.048759775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b4a28c2-e9a3-44f3-898e-d442ea1c9e3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.049237865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878717049212983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b4a28c2-e9a3-44f3-898e-d442ea1c9e3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.049799965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c9c0a52-0afb-4738-bcde-a4ec349a4eb6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.049862435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c9c0a52-0afb-4738-bcde-a4ec349a4eb6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.050076818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36f0cacae7ae87695db5d1d792cd901f314400923e624016e2c4bc47b5fa3318,PodSandboxId:307c3c9367fb88d15f9dfca73b15a568e5a4dee240c58977cca0de8e13328547,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760878704154567375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qmdvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0336727e7557eeeb65730afde9a4be6745e4fe7b5fa1be17f8cc10355061c5a,PodSandboxId:ab14f5358d9fadb7e17be80246d1460c20c9a8544d2f2f010f22df8f5af25efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760878700676696542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a8658129-3759-4db2-9c3c-eb1fcdf1cafa,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225cab54071a98879b1b5a29f76b0da52c4519db28baa7ee671b89ff6ec4a274,PodSandboxId:cddf00054fd2598e2143674364a05a177a82fea3357bf18c004febcbe61754c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760878700571724863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7c7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1
f72bb7-b544-400c-87e6-706a26b9dc92,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f2c86f353c7e26c9ca09b04a2d2fdafffe2d2922ce8002806b13da74d02a16,PodSandboxId:0e219937ea70572ac280224736a6aec3ff932ddd1a9e075a3ca15574a408124d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760878696305070933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a419c8e2ca26fa1406d90b6d83c63e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb4b7fb3769e7387c5a7b2e983add5b8aa6e524f7f69a6cd1ec66814d6526a,PodSandboxId:cb5f80f73dd2f6669256c2ce2dff33a9187c574bfd58b38bdd305a50434071a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760878696279093049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd2614cfc413ccde02ebb7db28f5229f,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc5cce59f83900e954f210fdf35a4196e3f8d91ebb5b77cbc9bb0637603fed20,PodSandboxId:6040904203d06b61d314fcbd49d6dde0b2f05a57b32b68ac4e3dd098bc930388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760878696273167443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be9f2bee982d59f15552f025ffcf93c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690d5b9a695e0b769b491b63f295785597c7677ad8b361615e7e9a85217c451e,PodSandboxId:a3a0a3ee412b192b3258e4c942a586d8c80a33001a85612319211e2c89bfa883,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760878696263577047,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcce758218b478d1729e0b36d944004,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c9c0a52-0afb-4738-bcde-a4ec349a4eb6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.088671832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=368284ea-f85a-48ff-b881-17100ad20f93 name=/runtime.v1.RuntimeService/Version
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.088759110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=368284ea-f85a-48ff-b881-17100ad20f93 name=/runtime.v1.RuntimeService/Version
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.089808909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b88ea23-053c-48ad-aaba-4769fddfdb01 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.090310905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878717090290086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b88ea23-053c-48ad-aaba-4769fddfdb01 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.090805630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aaa7914-bf6c-4c4d-b1c4-2fd14cd6794f name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.090868237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aaa7914-bf6c-4c4d-b1c4-2fd14cd6794f name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.091075291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36f0cacae7ae87695db5d1d792cd901f314400923e624016e2c4bc47b5fa3318,PodSandboxId:307c3c9367fb88d15f9dfca73b15a568e5a4dee240c58977cca0de8e13328547,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760878704154567375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qmdvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0336727e7557eeeb65730afde9a4be6745e4fe7b5fa1be17f8cc10355061c5a,PodSandboxId:ab14f5358d9fadb7e17be80246d1460c20c9a8544d2f2f010f22df8f5af25efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760878700676696542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a8658129-3759-4db2-9c3c-eb1fcdf1cafa,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225cab54071a98879b1b5a29f76b0da52c4519db28baa7ee671b89ff6ec4a274,PodSandboxId:cddf00054fd2598e2143674364a05a177a82fea3357bf18c004febcbe61754c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760878700571724863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7c7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1
f72bb7-b544-400c-87e6-706a26b9dc92,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f2c86f353c7e26c9ca09b04a2d2fdafffe2d2922ce8002806b13da74d02a16,PodSandboxId:0e219937ea70572ac280224736a6aec3ff932ddd1a9e075a3ca15574a408124d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760878696305070933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a419c8e2ca26fa1406d90b6d83c63e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb4b7fb3769e7387c5a7b2e983add5b8aa6e524f7f69a6cd1ec66814d6526a,PodSandboxId:cb5f80f73dd2f6669256c2ce2dff33a9187c574bfd58b38bdd305a50434071a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760878696279093049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd2614cfc413ccde02ebb7db28f5229f,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc5cce59f83900e954f210fdf35a4196e3f8d91ebb5b77cbc9bb0637603fed20,PodSandboxId:6040904203d06b61d314fcbd49d6dde0b2f05a57b32b68ac4e3dd098bc930388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760878696273167443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be9f2bee982d59f15552f025ffcf93c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690d5b9a695e0b769b491b63f295785597c7677ad8b361615e7e9a85217c451e,PodSandboxId:a3a0a3ee412b192b3258e4c942a586d8c80a33001a85612319211e2c89bfa883,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760878696263577047,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcce758218b478d1729e0b36d944004,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6aaa7914-bf6c-4c4d-b1c4-2fd14cd6794f name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.126115689Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=deaa8cb1-454a-4782-8aa4-b39d59ec814d name=/runtime.v1.RuntimeService/Version
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.126184751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=deaa8cb1-454a-4782-8aa4-b39d59ec814d name=/runtime.v1.RuntimeService/Version
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.127500510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1938af9d-5b9f-455b-b251-380f3b1a7f19 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.128274236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878717128249779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1938af9d-5b9f-455b-b251-380f3b1a7f19 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.129067139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e81ea36-535f-41af-850a-4dafc9c61c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.129179850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e81ea36-535f-41af-850a-4dafc9c61c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 12:58:37 test-preload-621731 crio[829]: time="2025-10-19 12:58:37.129373617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36f0cacae7ae87695db5d1d792cd901f314400923e624016e2c4bc47b5fa3318,PodSandboxId:307c3c9367fb88d15f9dfca73b15a568e5a4dee240c58977cca0de8e13328547,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760878704154567375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qmdvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0336727e7557eeeb65730afde9a4be6745e4fe7b5fa1be17f8cc10355061c5a,PodSandboxId:ab14f5358d9fadb7e17be80246d1460c20c9a8544d2f2f010f22df8f5af25efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760878700676696542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a8658129-3759-4db2-9c3c-eb1fcdf1cafa,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225cab54071a98879b1b5a29f76b0da52c4519db28baa7ee671b89ff6ec4a274,PodSandboxId:cddf00054fd2598e2143674364a05a177a82fea3357bf18c004febcbe61754c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760878700571724863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7c7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1
f72bb7-b544-400c-87e6-706a26b9dc92,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f2c86f353c7e26c9ca09b04a2d2fdafffe2d2922ce8002806b13da74d02a16,PodSandboxId:0e219937ea70572ac280224736a6aec3ff932ddd1a9e075a3ca15574a408124d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760878696305070933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a419c8e2ca26fa1406d90b6d83c63e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb4b7fb3769e7387c5a7b2e983add5b8aa6e524f7f69a6cd1ec66814d6526a,PodSandboxId:cb5f80f73dd2f6669256c2ce2dff33a9187c574bfd58b38bdd305a50434071a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760878696279093049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd2614cfc413ccde02ebb7db28f5229f,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc5cce59f83900e954f210fdf35a4196e3f8d91ebb5b77cbc9bb0637603fed20,PodSandboxId:6040904203d06b61d314fcbd49d6dde0b2f05a57b32b68ac4e3dd098bc930388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760878696273167443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be9f2bee982d59f15552f025ffcf93c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690d5b9a695e0b769b491b63f295785597c7677ad8b361615e7e9a85217c451e,PodSandboxId:a3a0a3ee412b192b3258e4c942a586d8c80a33001a85612319211e2c89bfa883,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760878696263577047,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-621731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcce758218b478d1729e0b36d944004,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e81ea36-535f-41af-850a-4dafc9c61c06 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36f0cacae7ae8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   307c3c9367fb8       coredns-668d6bf9bc-qmdvd
	b0336727e7557       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   ab14f5358d9fa       storage-provisioner
	225cab54071a9       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   cddf00054fd25       kube-proxy-w7c7c
	38f2c86f353c7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   0e219937ea705       etcd-test-preload-621731
	98cb4b7fb3769       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   cb5f80f73dd2f       kube-apiserver-test-preload-621731
	cc5cce59f8390       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   6040904203d06       kube-scheduler-test-preload-621731
	690d5b9a695e0       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   a3a0a3ee412b1       kube-controller-manager-test-preload-621731
	
	
	==> coredns [36f0cacae7ae87695db5d1d792cd901f314400923e624016e2c4bc47b5fa3318] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44267 - 40290 "HINFO IN 2926686472385261707.7142597973237914438. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036880968s
	
	
	==> describe nodes <==
	Name:               test-preload-621731
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-621731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=test-preload-621731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_56_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:56:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-621731
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:58:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:58:21 +0000   Sun, 19 Oct 2025 12:56:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:58:21 +0000   Sun, 19 Oct 2025 12:56:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:58:21 +0000   Sun, 19 Oct 2025 12:56:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:58:21 +0000   Sun, 19 Oct 2025 12:58:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    test-preload-621731
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fc7def6150a4923854f681971690a0b
	  System UUID:                7fc7def6-150a-4923-854f-681971690a0b
	  Boot ID:                    5c211ba6-d288-43ef-82f5-dbb784eaaac4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qmdvd                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     107s
	  kube-system                 etcd-test-preload-621731                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         112s
	  kube-system                 kube-apiserver-test-preload-621731             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-test-preload-621731    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-w7c7c                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-test-preload-621731             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16s                kube-proxy       
	  Normal   Starting                 106s               kube-proxy       
	  Normal   Starting                 113s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  113s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     112s               kubelet          Node test-preload-621731 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  112s               kubelet          Node test-preload-621731 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    112s               kubelet          Node test-preload-621731 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                111s               kubelet          Node test-preload-621731 status is now: NodeReady
	  Normal   RegisteredNode           109s               node-controller  Node test-preload-621731 event: Registered Node test-preload-621731 in Controller
	  Normal   CIDRAssignmentFailed     108s               cidrAllocator    Node test-preload-621731 status is now: CIDRAssignmentFailed
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-621731 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-621731 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-621731 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-621731 has been rebooted, boot id: 5c211ba6-d288-43ef-82f5-dbb784eaaac4
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-621731 event: Registered Node test-preload-621731 in Controller
	
	
	==> dmesg <==
	[Oct19 12:57] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001139] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct19 12:58] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.016183] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081491] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.101029] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.513298] kauditd_printk_skb: 177 callbacks suppressed
	[  +7.598990] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [38f2c86f353c7e26c9ca09b04a2d2fdafffe2d2922ce8002806b13da74d02a16] <==
	{"level":"info","ts":"2025-10-19T12:58:16.650848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a switched to configuration voters=(10397020729048077610)"}
	{"level":"info","ts":"2025-10-19T12:58:16.652700Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T12:58:16.652851Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec92057c53901c6c","local-member-id":"9049a3446d48952a","added-peer-id":"9049a3446d48952a","added-peer-peer-urls":["https://192.168.39.51:2380"]}
	{"level":"info","ts":"2025-10-19T12:58:16.656123Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec92057c53901c6c","local-member-id":"9049a3446d48952a","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T12:58:16.656165Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T12:58:16.660140Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9049a3446d48952a","initial-advertise-peer-urls":["https://192.168.39.51:2380"],"listen-peer-urls":["https://192.168.39.51:2380"],"advertise-client-urls":["https://192.168.39.51:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.51:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T12:58:16.660203Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T12:58:16.654381Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.51:2380"}
	{"level":"info","ts":"2025-10-19T12:58:16.660534Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.51:2380"}
	{"level":"info","ts":"2025-10-19T12:58:18.317975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T12:58:18.318086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T12:58:18.318108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a received MsgPreVoteResp from 9049a3446d48952a at term 2"}
	{"level":"info","ts":"2025-10-19T12:58:18.318121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T12:58:18.318127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a received MsgVoteResp from 9049a3446d48952a at term 3"}
	{"level":"info","ts":"2025-10-19T12:58:18.318135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became leader at term 3"}
	{"level":"info","ts":"2025-10-19T12:58:18.318142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9049a3446d48952a elected leader 9049a3446d48952a at term 3"}
	{"level":"info","ts":"2025-10-19T12:58:18.320791Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9049a3446d48952a","local-member-attributes":"{Name:test-preload-621731 ClientURLs:[https://192.168.39.51:2379]}","request-path":"/0/members/9049a3446d48952a/attributes","cluster-id":"ec92057c53901c6c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T12:58:18.320800Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T12:58:18.320823Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T12:58:18.321765Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-19T12:58:18.322353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T12:58:18.321765Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-19T12:58:18.323184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.51:2379"}
	{"level":"info","ts":"2025-10-19T12:58:18.322422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T12:58:18.326982Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:58:37 up 0 min,  0 users,  load average: 0.31, 0.09, 0.03
	Linux test-preload-621731 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [98cb4b7fb3769e7387c5a7b2e983add5b8aa6e524f7f69a6cd1ec66814d6526a] <==
	I1019 12:58:19.479589       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1019 12:58:19.486094       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 12:58:19.486839       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 12:58:19.496783       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 12:58:19.496859       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 12:58:19.496887       1 shared_informer.go:320] Caches are synced for configmaps
	I1019 12:58:19.496988       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 12:58:19.497066       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:58:19.506976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:58:19.520146       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1019 12:58:19.520727       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:58:19.520777       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:58:19.520795       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:58:19.520811       1 cache.go:39] Caches are synced for autoregister controller
	E1019 12:58:19.541806       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 12:58:19.565718       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1019 12:58:20.153774       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1019 12:58:20.389122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:58:21.360998       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1019 12:58:21.405521       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1019 12:58:21.451628       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:58:21.461500       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:58:22.894185       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:58:22.994539       1 controller.go:615] quota admission added evaluator for: endpoints
	I1019 12:58:23.046235       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [690d5b9a695e0b769b491b63f295785597c7677ad8b361615e7e9a85217c451e] <==
	I1019 12:58:22.704509       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1019 12:58:22.706081       1 shared_informer.go:320] Caches are synced for garbage collector
	I1019 12:58:22.706241       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:58:22.706271       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:58:22.707093       1 shared_informer.go:320] Caches are synced for namespace
	I1019 12:58:22.708046       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1019 12:58:22.710656       1 shared_informer.go:320] Caches are synced for PVC protection
	I1019 12:58:22.711653       1 shared_informer.go:320] Caches are synced for resource quota
	I1019 12:58:22.718154       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1019 12:58:22.719127       1 shared_informer.go:320] Caches are synced for stateful set
	I1019 12:58:22.726389       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1019 12:58:22.733685       1 shared_informer.go:320] Caches are synced for garbage collector
	I1019 12:58:22.735832       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1019 12:58:22.740379       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1019 12:58:22.740435       1 shared_informer.go:320] Caches are synced for deployment
	I1019 12:58:22.741179       1 shared_informer.go:320] Caches are synced for expand
	I1019 12:58:22.747855       1 shared_informer.go:320] Caches are synced for daemon sets
	I1019 12:58:22.757428       1 shared_informer.go:320] Caches are synced for ephemeral
	I1019 12:58:22.773074       1 shared_informer.go:320] Caches are synced for cronjob
	I1019 12:58:22.774279       1 shared_informer.go:320] Caches are synced for PV protection
	I1019 12:58:23.057174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="348.518371ms"
	I1019 12:58:23.057844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="391.066µs"
	I1019 12:58:25.250863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="96.569µs"
	I1019 12:58:31.587588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.655663ms"
	I1019 12:58:31.588231       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.34µs"
	
	
	==> kube-proxy [225cab54071a98879b1b5a29f76b0da52c4519db28baa7ee671b89ff6ec4a274] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1019 12:58:20.830579       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1019 12:58:20.838795       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.51"]
	E1019 12:58:20.838861       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:58:20.874113       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1019 12:58:20.874149       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1019 12:58:20.874186       1 server_linux.go:170] "Using iptables Proxier"
	I1019 12:58:20.876841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:58:20.877173       1 server.go:497] "Version info" version="v1.32.0"
	I1019 12:58:20.877398       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:58:20.878929       1 config.go:199] "Starting service config controller"
	I1019 12:58:20.879102       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1019 12:58:20.879205       1 config.go:105] "Starting endpoint slice config controller"
	I1019 12:58:20.879294       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1019 12:58:20.881700       1 config.go:329] "Starting node config controller"
	I1019 12:58:20.881728       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1019 12:58:20.980333       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1019 12:58:20.980365       1 shared_informer.go:320] Caches are synced for service config
	I1019 12:58:20.981946       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cc5cce59f83900e954f210fdf35a4196e3f8d91ebb5b77cbc9bb0637603fed20] <==
	I1019 12:58:17.314533       1 serving.go:386] Generated self-signed cert in-memory
	W1019 12:58:19.453597       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 12:58:19.453635       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 12:58:19.453645       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 12:58:19.453651       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 12:58:19.515618       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1019 12:58:19.515763       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:58:19.519807       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1019 12:58:19.520012       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:58:19.522769       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 12:58:19.520028       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:58:19.623016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 12:58:19 test-preload-621731 kubelet[1149]: E1019 12:58:19.571819    1149 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-621731\" already exists" pod="kube-system/kube-controller-manager-test-preload-621731"
	Oct 19 12:58:19 test-preload-621731 kubelet[1149]: I1019 12:58:19.571846    1149 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-621731"
	Oct 19 12:58:19 test-preload-621731 kubelet[1149]: E1019 12:58:19.586457    1149 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-621731\" already exists" pod="kube-system/kube-scheduler-test-preload-621731"
	Oct 19 12:58:19 test-preload-621731 kubelet[1149]: I1019 12:58:19.586494    1149 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-621731"
	Oct 19 12:58:19 test-preload-621731 kubelet[1149]: E1019 12:58:19.604715    1149 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-621731\" already exists" pod="kube-system/kube-apiserver-test-preload-621731"
	Oct 19 12:58:19 test-preload-621731 kubelet[1149]: I1019 12:58:19.604759    1149 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-621731"
	Oct 19 12:58:19 test-preload-621731 kubelet[1149]: E1019 12:58:19.615686    1149 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-621731\" already exists" pod="kube-system/etcd-test-preload-621731"
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: I1019 12:58:20.083728    1149 apiserver.go:52] "Watching apiserver"
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: E1019 12:58:20.091535    1149 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qmdvd" podUID="9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071"
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: I1019 12:58:20.111565    1149 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: I1019 12:58:20.146243    1149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a8658129-3759-4db2-9c3c-eb1fcdf1cafa-tmp\") pod \"storage-provisioner\" (UID: \"a8658129-3759-4db2-9c3c-eb1fcdf1cafa\") " pod="kube-system/storage-provisioner"
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: I1019 12:58:20.146314    1149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1f72bb7-b544-400c-87e6-706a26b9dc92-lib-modules\") pod \"kube-proxy-w7c7c\" (UID: \"b1f72bb7-b544-400c-87e6-706a26b9dc92\") " pod="kube-system/kube-proxy-w7c7c"
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: I1019 12:58:20.146342    1149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1f72bb7-b544-400c-87e6-706a26b9dc92-xtables-lock\") pod \"kube-proxy-w7c7c\" (UID: \"b1f72bb7-b544-400c-87e6-706a26b9dc92\") " pod="kube-system/kube-proxy-w7c7c"
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: E1019 12:58:20.146621    1149 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: E1019 12:58:20.146689    1149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071-config-volume podName:9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071 nodeName:}" failed. No retries permitted until 2025-10-19 12:58:20.646672008 +0000 UTC m=+6.656488175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071-config-volume") pod "coredns-668d6bf9bc-qmdvd" (UID: "9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071") : object "kube-system"/"coredns" not registered
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: E1019 12:58:20.649760    1149 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 19 12:58:20 test-preload-621731 kubelet[1149]: E1019 12:58:20.649806    1149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071-config-volume podName:9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071 nodeName:}" failed. No retries permitted until 2025-10-19 12:58:21.649794332 +0000 UTC m=+7.659610499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071-config-volume") pod "coredns-668d6bf9bc-qmdvd" (UID: "9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071") : object "kube-system"/"coredns" not registered
	Oct 19 12:58:21 test-preload-621731 kubelet[1149]: E1019 12:58:21.655228    1149 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 19 12:58:21 test-preload-621731 kubelet[1149]: E1019 12:58:21.655318    1149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071-config-volume podName:9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071 nodeName:}" failed. No retries permitted until 2025-10-19 12:58:23.655303554 +0000 UTC m=+9.665119721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071-config-volume") pod "coredns-668d6bf9bc-qmdvd" (UID: "9dcfcbd3-b9cf-4ba1-acfa-faa69a9bb071") : object "kube-system"/"coredns" not registered
	Oct 19 12:58:21 test-preload-621731 kubelet[1149]: I1019 12:58:21.660281    1149 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 19 12:58:24 test-preload-621731 kubelet[1149]: E1019 12:58:24.182282    1149 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878704181695092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 19 12:58:24 test-preload-621731 kubelet[1149]: E1019 12:58:24.182307    1149 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878704181695092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 19 12:58:31 test-preload-621731 kubelet[1149]: I1019 12:58:31.559011    1149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 12:58:34 test-preload-621731 kubelet[1149]: E1019 12:58:34.183433    1149 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878714183188919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 19 12:58:34 test-preload-621731 kubelet[1149]: E1019 12:58:34.183471    1149 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760878714183188919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b0336727e7557eeeb65730afde9a4be6745e4fe7b5fa1be17f8cc10355061c5a] <==
	I1019 12:58:20.790363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-621731 -n test-preload-621731
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-621731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-621731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-621731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-621731: (1.03983811s)
--- FAIL: TestPreload (166.68s)

x
+
TestPause/serial/SecondStartNoReconfiguration (376.56s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-969331 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-969331 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 80 (6m14.396792694s)

-- stdout --
	* [pause-969331] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-969331" primary control-plane node in "pause-969331" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1019 13:05:05.653169  187539 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:05:05.653528  187539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:05.653548  187539 out.go:374] Setting ErrFile to fd 2...
	I1019 13:05:05.653556  187539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:05.653858  187539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 13:05:05.654599  187539 out.go:368] Setting JSON to false
	I1019 13:05:05.656021  187539 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6440,"bootTime":1760872666,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 13:05:05.656158  187539 start.go:141] virtualization: kvm guest
	I1019 13:05:05.658410  187539 out.go:179] * [pause-969331] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 13:05:05.659478  187539 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:05:05.659501  187539 notify.go:220] Checking for updates...
	I1019 13:05:05.663307  187539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:05:05.664488  187539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 13:05:05.665600  187539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 13:05:05.669713  187539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 13:05:05.670842  187539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:05:05.672317  187539 config.go:182] Loaded profile config "pause-969331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:05:05.672957  187539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 13:05:05.673021  187539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 13:05:05.687983  187539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I1019 13:05:05.688589  187539 main.go:141] libmachine: () Calling .GetVersion
	I1019 13:05:05.689213  187539 main.go:141] libmachine: Using API Version  1
	I1019 13:05:05.689251  187539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 13:05:05.689737  187539 main.go:141] libmachine: () Calling .GetMachineName
	I1019 13:05:05.689975  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:05.690633  187539 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:05:05.691095  187539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 13:05:05.691152  187539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 13:05:05.709787  187539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I1019 13:05:05.710577  187539 main.go:141] libmachine: () Calling .GetVersion
	I1019 13:05:05.711186  187539 main.go:141] libmachine: Using API Version  1
	I1019 13:05:05.711246  187539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 13:05:05.711751  187539 main.go:141] libmachine: () Calling .GetMachineName
	I1019 13:05:05.711971  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:05.749478  187539 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 13:05:05.750763  187539 start.go:305] selected driver: kvm2
	I1019 13:05:05.750786  187539 start.go:925] validating driver "kvm2" against &{Name:pause-969331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-969331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.162 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:05:05.751040  187539 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:05:05.751538  187539 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:05:05.751627  187539 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 13:05:05.765790  187539 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 13:05:05.765823  187539 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 13:05:05.780859  187539 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 13:05:05.781881  187539 cni.go:84] Creating CNI manager for ""
	I1019 13:05:05.781958  187539 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 13:05:05.782042  187539 start.go:349] cluster config:
	{Name:pause-969331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-969331 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.162 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:05:05.782201  187539 iso.go:125] acquiring lock: {Name:mk95990edcd162f08eff1d65580753d7d9806693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:05:05.783457  187539 out.go:179] * Starting "pause-969331" primary control-plane node in "pause-969331" cluster
	I1019 13:05:05.784296  187539 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:05:05.784344  187539 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 13:05:05.784354  187539 cache.go:58] Caching tarball of preloaded images
	I1019 13:05:05.784429  187539 preload.go:233] Found /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 13:05:05.784439  187539 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:05:05.784578  187539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/config.json ...
	I1019 13:05:05.784855  187539 start.go:360] acquireMachinesLock for pause-969331: {Name:mk205e9aa7c82fb04c974fad7345827c2806baf1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 13:05:28.947138  187539 start.go:364] duration metric: took 23.162252372s to acquireMachinesLock for "pause-969331"
	I1019 13:05:28.947185  187539 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:05:28.947194  187539 fix.go:54] fixHost starting: 
	I1019 13:05:28.947814  187539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 13:05:28.947868  187539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 13:05:28.966718  187539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1019 13:05:28.967426  187539 main.go:141] libmachine: () Calling .GetVersion
	I1019 13:05:28.968012  187539 main.go:141] libmachine: Using API Version  1
	I1019 13:05:28.968041  187539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 13:05:28.968459  187539 main.go:141] libmachine: () Calling .GetMachineName
	I1019 13:05:28.968669  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:28.968833  187539 main.go:141] libmachine: (pause-969331) Calling .GetState
	I1019 13:05:28.971085  187539 fix.go:112] recreateIfNeeded on pause-969331: state=Running err=<nil>
	W1019 13:05:28.971110  187539 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:05:28.974737  187539 out.go:252] * Updating the running kvm2 "pause-969331" VM ...
	I1019 13:05:28.974768  187539 machine.go:93] provisionDockerMachine start ...
	I1019 13:05:28.974784  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:28.975060  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:28.979145  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:28.979771  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:28.979794  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:28.980033  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:28.980227  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:28.980387  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:28.980520  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:28.980723  187539 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:28.981136  187539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.72.162 22 <nil> <nil>}
	I1019 13:05:28.981153  187539 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:05:29.101106  187539 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-969331
	
	I1019 13:05:29.101163  187539 main.go:141] libmachine: (pause-969331) Calling .GetMachineName
	I1019 13:05:29.101507  187539 buildroot.go:166] provisioning hostname "pause-969331"
	I1019 13:05:29.101541  187539 main.go:141] libmachine: (pause-969331) Calling .GetMachineName
	I1019 13:05:29.101792  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:29.106374  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.218350  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:29.218393  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.218849  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:29.219151  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:29.219400  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:29.219613  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:29.219861  187539 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:29.220188  187539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.72.162 22 <nil> <nil>}
	I1019 13:05:29.220212  187539 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-969331 && echo "pause-969331" | sudo tee /etc/hostname
	I1019 13:05:29.349753  187539 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-969331
	
	I1019 13:05:29.349786  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:29.354030  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.354630  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:29.354663  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.354989  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:29.355205  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:29.355421  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:29.355564  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:29.355751  187539 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:29.355985  187539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.72.162 22 <nil> <nil>}
	I1019 13:05:29.356000  187539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-969331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-969331/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-969331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:05:29.470824  187539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:05:29.470864  187539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21772-144655/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-144655/.minikube}
	I1019 13:05:29.470911  187539 buildroot.go:174] setting up certificates
	I1019 13:05:29.470928  187539 provision.go:84] configureAuth start
	I1019 13:05:29.470941  187539 main.go:141] libmachine: (pause-969331) Calling .GetMachineName
	I1019 13:05:29.471336  187539 main.go:141] libmachine: (pause-969331) Calling .GetIP
	I1019 13:05:29.474921  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.475415  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:29.475436  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.475666  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:29.478461  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.478888  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:29.478914  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.479087  187539 provision.go:143] copyHostCerts
	I1019 13:05:29.479150  187539 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-144655/.minikube/ca.pem, removing ...
	I1019 13:05:29.479173  187539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-144655/.minikube/ca.pem
	I1019 13:05:29.479252  187539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/ca.pem (1078 bytes)
	I1019 13:05:29.479411  187539 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-144655/.minikube/cert.pem, removing ...
	I1019 13:05:29.479423  187539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-144655/.minikube/cert.pem
	I1019 13:05:29.479465  187539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/cert.pem (1123 bytes)
	I1019 13:05:29.479554  187539 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-144655/.minikube/key.pem, removing ...
	I1019 13:05:29.479565  187539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-144655/.minikube/key.pem
	I1019 13:05:29.479595  187539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-144655/.minikube/key.pem (1675 bytes)
	I1019 13:05:29.479671  187539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem org=jenkins.pause-969331 san=[127.0.0.1 192.168.72.162 localhost minikube pause-969331]
	I1019 13:05:29.964898  187539 provision.go:177] copyRemoteCerts
	I1019 13:05:29.964965  187539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:05:29.964997  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:29.968342  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.968846  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:29.968880  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:29.969112  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:29.969347  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:29.969631  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:29.969832  187539 sshutil.go:53] new ssh client: &{IP:192.168.72.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/pause-969331/id_rsa Username:docker}
	I1019 13:05:30.057800  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1019 13:05:30.089268  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1019 13:05:30.125319  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 13:05:30.156167  187539 provision.go:87] duration metric: took 685.221082ms to configureAuth
	I1019 13:05:30.156211  187539 buildroot.go:189] setting minikube options for container-runtime
	I1019 13:05:30.156485  187539 config.go:182] Loaded profile config "pause-969331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:05:30.156566  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:30.160061  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:30.160565  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:30.160599  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:30.160849  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:30.161088  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:30.161308  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:30.161480  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:30.161680  187539 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:30.161911  187539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.72.162 22 <nil> <nil>}
	I1019 13:05:30.161932  187539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:05:37.340666  187539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:05:37.340705  187539 machine.go:96] duration metric: took 8.365927487s to provisionDockerMachine
	I1019 13:05:37.340726  187539 start.go:293] postStartSetup for "pause-969331" (driver="kvm2")
	I1019 13:05:37.340743  187539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:05:37.340770  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:37.341375  187539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:05:37.341415  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:37.345012  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.345520  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:37.345551  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.345735  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:37.345948  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:37.346163  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:37.346358  187539 sshutil.go:53] new ssh client: &{IP:192.168.72.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/pause-969331/id_rsa Username:docker}
	I1019 13:05:37.477372  187539 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:05:37.494675  187539 info.go:137] Remote host: Buildroot 2025.02
	I1019 13:05:37.494701  187539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-144655/.minikube/addons for local assets ...
	I1019 13:05:37.494783  187539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-144655/.minikube/files for local assets ...
	I1019 13:05:37.494879  187539 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem -> 1487012.pem in /etc/ssl/certs
	I1019 13:05:37.495008  187539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:05:37.520682  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem --> /etc/ssl/certs/1487012.pem (1708 bytes)
	I1019 13:05:37.586183  187539 start.go:296] duration metric: took 245.434154ms for postStartSetup
	I1019 13:05:37.586250  187539 fix.go:56] duration metric: took 8.639055296s for fixHost
	I1019 13:05:37.586297  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:37.590160  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.590748  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:37.590785  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.591048  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:37.591319  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:37.591553  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:37.591747  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:37.591930  187539 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:37.592247  187539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 192.168.72.162 22 <nil> <nil>}
	I1019 13:05:37.592261  187539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1019 13:05:37.765135  187539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760879137.760396120
	
	I1019 13:05:37.765183  187539 fix.go:216] guest clock: 1760879137.760396120
	I1019 13:05:37.765200  187539 fix.go:229] Guest: 2025-10-19 13:05:37.76039612 +0000 UTC Remote: 2025-10-19 13:05:37.586255902 +0000 UTC m=+31.976475206 (delta=174.140218ms)
	I1019 13:05:37.765228  187539 fix.go:200] guest clock delta is within tolerance: 174.140218ms
	I1019 13:05:37.765233  187539 start.go:83] releasing machines lock for "pause-969331", held for 8.818070746s
	I1019 13:05:37.765264  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:37.765615  187539 main.go:141] libmachine: (pause-969331) Calling .GetIP
	I1019 13:05:37.769708  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.770255  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:37.770301  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.770508  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:37.770994  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:37.771203  187539 main.go:141] libmachine: (pause-969331) Calling .DriverName
	I1019 13:05:37.771317  187539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:05:37.771373  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:37.771460  187539 ssh_runner.go:195] Run: cat /version.json
	I1019 13:05:37.771486  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHHostname
	I1019 13:05:37.774881  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.774917  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.775417  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:37.775448  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.775581  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:05:37.775610  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:37.775619  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:05:37.775834  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:37.775864  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHPort
	I1019 13:05:37.776020  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHKeyPath
	I1019 13:05:37.776023  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:37.776193  187539 main.go:141] libmachine: (pause-969331) Calling .GetSSHUsername
	I1019 13:05:37.776198  187539 sshutil.go:53] new ssh client: &{IP:192.168.72.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/pause-969331/id_rsa Username:docker}
	I1019 13:05:37.776357  187539 sshutil.go:53] new ssh client: &{IP:192.168.72.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/pause-969331/id_rsa Username:docker}
	I1019 13:05:37.936974  187539 ssh_runner.go:195] Run: systemctl --version
	I1019 13:05:37.947698  187539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:05:38.185621  187539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:05:38.202446  187539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:05:38.202541  187539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:05:38.237726  187539 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:05:38.237761  187539 start.go:495] detecting cgroup driver to use...
	I1019 13:05:38.237856  187539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:05:38.275783  187539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:05:38.326527  187539 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:05:38.326589  187539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:05:38.374947  187539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:05:38.461057  187539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:05:38.734617  187539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:05:39.031379  187539 docker.go:234] disabling docker service ...
	I1019 13:05:39.031458  187539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:05:39.065586  187539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:05:39.084392  187539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:05:39.322845  187539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:05:39.588563  187539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:05:39.615049  187539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:05:39.654470  187539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:05:39.654554  187539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:39.669710  187539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:05:39.669786  187539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:39.689773  187539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:39.706028  187539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:39.724075  187539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:05:39.742049  187539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:39.758893  187539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:39.773795  187539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:39.786488  187539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:05:39.802506  187539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:05:39.819558  187539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:05:40.077578  187539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 13:07:10.484260  187539 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.406625517s)
	I1019 13:07:10.484320  187539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:07:10.484394  187539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:07:10.490353  187539 start.go:563] Will wait 60s for crictl version
	I1019 13:07:10.490448  187539 ssh_runner.go:195] Run: which crictl
	I1019 13:07:10.495048  187539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1019 13:07:10.537048  187539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1019 13:07:10.537171  187539 ssh_runner.go:195] Run: crio --version
	I1019 13:07:10.570620  187539 ssh_runner.go:195] Run: crio --version
	I1019 13:07:10.610042  187539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1019 13:07:10.611744  187539 main.go:141] libmachine: (pause-969331) Calling .GetIP
	I1019 13:07:10.616338  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:07:10.616848  187539 main.go:141] libmachine: (pause-969331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:3e:2a", ip: ""} in network mk-pause-969331: {Iface:virbr4 ExpiryTime:2025-10-19 14:03:59 +0000 UTC Type:0 Mac:52:54:00:32:3e:2a Iaid: IPaddr:192.168.72.162 Prefix:24 Hostname:pause-969331 Clientid:01:52:54:00:32:3e:2a}
	I1019 13:07:10.616877  187539 main.go:141] libmachine: (pause-969331) DBG | domain pause-969331 has defined IP address 192.168.72.162 and MAC address 52:54:00:32:3e:2a in network mk-pause-969331
	I1019 13:07:10.617235  187539 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1019 13:07:10.623184  187539 kubeadm.go:883] updating cluster {Name:pause-969331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-969331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.162 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:07:10.623352  187539 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:07:10.623407  187539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:07:10.679208  187539 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:07:10.679249  187539 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:07:10.679323  187539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:07:10.721984  187539 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:07:10.722012  187539 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:07:10.722022  187539 kubeadm.go:934] updating node { 192.168.72.162 8443 v1.34.1 crio true true} ...
	I1019 13:07:10.722137  187539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-969331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-969331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:07:10.722227  187539 ssh_runner.go:195] Run: crio config
	I1019 13:07:10.790695  187539 cni.go:84] Creating CNI manager for ""
	I1019 13:07:10.790724  187539 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 13:07:10.790746  187539 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:07:10.790777  187539 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.162 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-969331 NodeName:pause-969331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:07:10.790946  187539 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-969331"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:07:10.791026  187539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:07:10.806053  187539 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:07:10.806129  187539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:07:10.818887  187539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1019 13:07:10.848361  187539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:07:10.879717  187539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1019 13:07:10.909370  187539 ssh_runner.go:195] Run: grep 192.168.72.162	control-plane.minikube.internal$ /etc/hosts
	I1019 13:07:10.914713  187539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:07:11.158765  187539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:07:11.185022  187539 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331 for IP: 192.168.72.162
	I1019 13:07:11.185051  187539 certs.go:195] generating shared ca certs ...
	I1019 13:07:11.185075  187539 certs.go:227] acquiring lock for ca certs: {Name:mk3746b9a64228b33b458f684a19c91de0767499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:07:11.185269  187539 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key
	I1019 13:07:11.185342  187539 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key
	I1019 13:07:11.185358  187539 certs.go:257] generating profile certs ...
	I1019 13:07:11.185492  187539 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/client.key
	I1019 13:07:11.185642  187539 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/apiserver.key.be6b9810
	I1019 13:07:11.185718  187539 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/proxy-client.key
	I1019 13:07:11.185870  187539 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/148701.pem (1338 bytes)
	W1019 13:07:11.185922  187539 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-144655/.minikube/certs/148701_empty.pem, impossibly tiny 0 bytes
	I1019 13:07:11.185939  187539 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 13:07:11.185976  187539 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem (1078 bytes)
	I1019 13:07:11.186012  187539 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:07:11.186049  187539 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/certs/key.pem (1675 bytes)
	I1019 13:07:11.186111  187539 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem (1708 bytes)
	I1019 13:07:11.186912  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:07:11.220582  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:07:11.253813  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:07:11.290799  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 13:07:11.323983  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 13:07:11.354239  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:07:11.387240  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:07:11.423867  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/pause-969331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:07:11.457077  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:07:11.488483  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/certs/148701.pem --> /usr/share/ca-certificates/148701.pem (1338 bytes)
	I1019 13:07:11.520523  187539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/ssl/certs/1487012.pem --> /usr/share/ca-certificates/1487012.pem (1708 bytes)
	I1019 13:07:11.548217  187539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:07:11.567022  187539 ssh_runner.go:195] Run: openssl version
	I1019 13:07:11.573424  187539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:07:11.586064  187539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:07:11.590941  187539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:07 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:07:11.591009  187539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:07:11.597754  187539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:07:11.608657  187539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148701.pem && ln -fs /usr/share/ca-certificates/148701.pem /etc/ssl/certs/148701.pem"
	I1019 13:07:11.622635  187539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148701.pem
	I1019 13:07:11.627851  187539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:15 /usr/share/ca-certificates/148701.pem
	I1019 13:07:11.627943  187539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148701.pem
	I1019 13:07:11.634900  187539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148701.pem /etc/ssl/certs/51391683.0"
	I1019 13:07:11.646233  187539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1487012.pem && ln -fs /usr/share/ca-certificates/1487012.pem /etc/ssl/certs/1487012.pem"
	I1019 13:07:11.659399  187539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1487012.pem
	I1019 13:07:11.664768  187539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:15 /usr/share/ca-certificates/1487012.pem
	I1019 13:07:11.664821  187539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1487012.pem
	I1019 13:07:11.671475  187539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1487012.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 13:07:11.682092  187539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:07:11.686833  187539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:07:11.693447  187539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:07:11.700045  187539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:07:11.706656  187539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:07:11.713395  187539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:07:11.721343  187539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 13:07:11.728231  187539 kubeadm.go:400] StartCluster: {Name:pause-969331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-969331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.162 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:07:11.728367  187539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:07:11.728424  187539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:07:11.831603  187539 cri.go:89] found id: "deda0604a3881aa38c018190951670acd23bad65ab7094a72efb375d7e639be1"
	I1019 13:07:11.831625  187539 cri.go:89] found id: "8f49c8da32752ce8e7e7393d89714ed081e91555e7802492d22a65660214b818"
	I1019 13:07:11.831631  187539 cri.go:89] found id: "2fc4b2cebaf6604f4aa3db750ccb0ee244435bfc9f3423405c92ff4efcb85678"
	I1019 13:07:11.831634  187539 cri.go:89] found id: "c39b4c3bb62fe2128157d993c07111b9ae8927d89c6084d00a898bd3337ea481"
	I1019 13:07:11.831637  187539 cri.go:89] found id: "1c17deb73ba4849d3a92a56b2679a6e4acd0cc221356c77c3b99be352245a13f"
	I1019 13:07:11.831640  187539 cri.go:89] found id: "acb55a8aabc22d40b5cb529040cdee34be7bd0052b74455a1729a5663098f0fe"
	I1019 13:07:11.831643  187539 cri.go:89] found id: "dccf7368f1ad66da35dc97c01377bb9fee547f3ba45f5bb228c74b637f5c0874"
	I1019 13:07:11.831646  187539 cri.go:89] found id: "b281372c8ef8201c7224ec06c6c10efa1caa968ee76d139993c19965f87582c2"
	I1019 13:07:11.831648  187539 cri.go:89] found id: "5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1"
	I1019 13:07:11.831656  187539 cri.go:89] found id: "895e8b188cb1fb6b994fedde2e3e0608db379e9352a758dfe9d8ef97d7a7e781"
	I1019 13:07:11.831658  187539 cri.go:89] found id: "5cf3f3dd679a0d967ce84be7c75015a8088c0fcfcbba0ff3b7a40759ea493749"
	I1019 13:07:11.831661  187539 cri.go:89] found id: ""
	I1019 13:07:11.831705  187539 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-969331 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-969331 -n pause-969331
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-969331 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-969331 logs -n 25: (1.429395502s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬────────────────
─────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼────────────────
─────┼─────────────────────┤
	│ ssh     │ -p flannel-422995 sudo systemctl status kubelet --all --full --no-pager                                                                                                                                                                                                 │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo systemctl cat kubelet --no-pager                                                                                                                                                                                                                 │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                                                  │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                                                 │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                                                 │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                                                  │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │                     │
	│ ssh     │ -p flannel-422995 sudo systemctl cat docker --no-pager                                                                                                                                                                                                                  │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                                      │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo docker system info                                                                                                                                                                                                                               │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │                     │
	│ ssh     │ -p flannel-422995 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                                              │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │                     │
	│ ssh     │ -p flannel-422995 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                                              │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                                         │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │                     │
	│ ssh     │ -p flannel-422995 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                                   │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo cri-dockerd --version                                                                                                                                                                                                                            │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                                              │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │                     │
	│ ssh     │ -p flannel-422995 sudo systemctl cat containerd --no-pager                                                                                                                                                                                                              │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                                       │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                                  │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo containerd config dump                                                                                                                                                                                                                           │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                                    │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo systemctl cat crio --no-pager                                                                                                                                                                                                                    │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                                          │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ -p flannel-422995 sudo crio config                                                                                                                                                                                                                                      │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p flannel-422995                                                                                                                                                                                                                                                       │ flannel-422995         │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p old-k8s-version-725412 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-725412 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴────────────────
─────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:11:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:11:10.443880  199330 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:11:10.444223  199330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:11:10.444234  199330 out.go:374] Setting ErrFile to fd 2...
	I1019 13:11:10.444240  199330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:11:10.444581  199330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 13:11:10.445248  199330 out.go:368] Setting JSON to false
	I1019 13:11:10.446757  199330 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6804,"bootTime":1760872666,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 13:11:10.446903  199330 start.go:141] virtualization: kvm guest
	I1019 13:11:10.448738  199330 out.go:179] * [old-k8s-version-725412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 13:11:10.450203  199330 notify.go:220] Checking for updates...
	I1019 13:11:10.450236  199330 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:11:10.451266  199330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:11:10.452651  199330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 13:11:10.454061  199330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 13:11:10.455118  199330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 13:11:10.456097  199330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:11:10.457952  199330 config.go:182] Loaded profile config "bridge-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:11:10.458114  199330 config.go:182] Loaded profile config "cert-expiration-426397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:11:10.458268  199330 config.go:182] Loaded profile config "pause-969331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:11:10.458402  199330 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:11:10.503740  199330 out.go:179] * Using the kvm2 driver based on user configuration
	I1019 13:11:10.504772  199330 start.go:305] selected driver: kvm2
	I1019 13:11:10.504789  199330 start.go:925] validating driver "kvm2" against <nil>
	I1019 13:11:10.504801  199330 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:11:10.505736  199330 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:11:10.505829  199330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 13:11:10.521811  199330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 13:11:10.521846  199330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 13:11:10.538201  199330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 13:11:10.538250  199330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:11:10.538541  199330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:11:10.538572  199330 cni.go:84] Creating CNI manager for ""
	I1019 13:11:10.538618  199330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 13:11:10.538628  199330 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1019 13:11:10.538673  199330 start.go:349] cluster config:
	{Name:old-k8s-version-725412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-725412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:11:10.538770  199330 iso.go:125] acquiring lock: {Name:mk95990edcd162f08eff1d65580753d7d9806693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:11:10.540848  199330 out.go:179] * Starting "old-k8s-version-725412" primary control-plane node in "old-k8s-version-725412" cluster
	I1019 13:11:10.541798  199330 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:11:10.541846  199330 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 13:11:10.541861  199330 cache.go:58] Caching tarball of preloaded images
	I1019 13:11:10.541964  199330 preload.go:233] Found /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 13:11:10.541978  199330 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 13:11:10.542093  199330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/old-k8s-version-725412/config.json ...
	I1019 13:11:10.542120  199330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/old-k8s-version-725412/config.json: {Name:mkfe84b6bc603db1d8626fb783b4fe3bfbbc3cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:11:10.542303  199330 start.go:360] acquireMachinesLock for old-k8s-version-725412: {Name:mk205e9aa7c82fb04c974fad7345827c2806baf1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 13:11:10.542352  199330 start.go:364] duration metric: took 25.719µs to acquireMachinesLock for "old-k8s-version-725412"
	I1019 13:11:10.542382  199330 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-725412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-725412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:11:10.542453  199330 start.go:125] createHost starting for "" (driver="kvm2")
	W1019 13:11:06.415352  187539 pod_ready.go:104] pod "kube-scheduler-pause-969331" is not "Ready", error: <nil>
	W1019 13:11:08.417017  187539 pod_ready.go:104] pod "kube-scheduler-pause-969331" is not "Ready", error: <nil>
	W1019 13:11:10.417890  187539 pod_ready.go:104] pod "kube-scheduler-pause-969331" is not "Ready", error: <nil>
	I1019 13:11:09.179812  197493 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:11:09.179835  197493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:11:09.179855  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHHostname
	I1019 13:11:09.185076  197493 main.go:141] libmachine: (bridge-422995) DBG | domain bridge-422995 has defined MAC address 52:54:00:41:ff:d3 in network mk-bridge-422995
	I1019 13:11:09.185995  197493 main.go:141] libmachine: (bridge-422995) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:ff:d3", ip: ""} in network mk-bridge-422995: {Iface:virbr2 ExpiryTime:2025-10-19 14:10:42 +0000 UTC Type:0 Mac:52:54:00:41:ff:d3 Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:bridge-422995 Clientid:01:52:54:00:41:ff:d3}
	I1019 13:11:09.186230  197493 main.go:141] libmachine: (bridge-422995) DBG | domain bridge-422995 has defined IP address 192.168.50.17 and MAC address 52:54:00:41:ff:d3 in network mk-bridge-422995
	I1019 13:11:09.186327  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHPort
	I1019 13:11:09.186721  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHKeyPath
	I1019 13:11:09.186967  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHUsername
	I1019 13:11:09.187321  197493 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/bridge-422995/id_rsa Username:docker}
	I1019 13:11:09.194841  197493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I1019 13:11:09.195384  197493 main.go:141] libmachine: () Calling .GetVersion
	I1019 13:11:09.195930  197493 main.go:141] libmachine: Using API Version  1
	I1019 13:11:09.195954  197493 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 13:11:09.196490  197493 main.go:141] libmachine: () Calling .GetMachineName
	I1019 13:11:09.196795  197493 main.go:141] libmachine: (bridge-422995) Calling .GetState
	I1019 13:11:09.198881  197493 main.go:141] libmachine: (bridge-422995) Calling .DriverName
	I1019 13:11:09.199120  197493 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:11:09.199137  197493 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:11:09.199157  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHHostname
	I1019 13:11:09.202624  197493 main.go:141] libmachine: (bridge-422995) DBG | domain bridge-422995 has defined MAC address 52:54:00:41:ff:d3 in network mk-bridge-422995
	I1019 13:11:09.203128  197493 main.go:141] libmachine: (bridge-422995) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:ff:d3", ip: ""} in network mk-bridge-422995: {Iface:virbr2 ExpiryTime:2025-10-19 14:10:42 +0000 UTC Type:0 Mac:52:54:00:41:ff:d3 Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:bridge-422995 Clientid:01:52:54:00:41:ff:d3}
	I1019 13:11:09.203171  197493 main.go:141] libmachine: (bridge-422995) DBG | domain bridge-422995 has defined IP address 192.168.50.17 and MAC address 52:54:00:41:ff:d3 in network mk-bridge-422995
	I1019 13:11:09.203347  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHPort
	I1019 13:11:09.203534  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHKeyPath
	I1019 13:11:09.203727  197493 main.go:141] libmachine: (bridge-422995) Calling .GetSSHUsername
	I1019 13:11:09.203856  197493 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/bridge-422995/id_rsa Username:docker}
	I1019 13:11:09.440898  197493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 13:11:09.564641  197493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:11:09.786835  197493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:11:09.819598  197493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:11:10.298824  197493 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1019 13:11:10.299300  197493 main.go:141] libmachine: Making call to close driver server
	I1019 13:11:10.299337  197493 main.go:141] libmachine: (bridge-422995) Calling .Close
	I1019 13:11:10.299786  197493 main.go:141] libmachine: Successfully made call to close driver server
	I1019 13:11:10.299813  197493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 13:11:10.299823  197493 main.go:141] libmachine: Making call to close driver server
	I1019 13:11:10.299833  197493 main.go:141] libmachine: (bridge-422995) Calling .Close
	I1019 13:11:10.300382  197493 node_ready.go:35] waiting up to 15m0s for node "bridge-422995" to be "Ready" ...
	I1019 13:11:10.301168  197493 main.go:141] libmachine: (bridge-422995) DBG | Closing plugin on server side
	I1019 13:11:10.301182  197493 main.go:141] libmachine: Successfully made call to close driver server
	I1019 13:11:10.301202  197493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 13:11:10.321386  197493 node_ready.go:49] node "bridge-422995" is "Ready"
	I1019 13:11:10.321415  197493 node_ready.go:38] duration metric: took 21.003808ms for node "bridge-422995" to be "Ready" ...
	I1019 13:11:10.321433  197493 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:11:10.321491  197493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:11:10.335919  197493 main.go:141] libmachine: Making call to close driver server
	I1019 13:11:10.335945  197493 main.go:141] libmachine: (bridge-422995) Calling .Close
	I1019 13:11:10.336293  197493 main.go:141] libmachine: Successfully made call to close driver server
	I1019 13:11:10.336315  197493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 13:11:10.336339  197493 main.go:141] libmachine: (bridge-422995) DBG | Closing plugin on server side
	I1019 13:11:10.812875  197493 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-422995" context rescaled to 1 replicas
	I1019 13:11:10.910389  197493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090756691s)
	I1019 13:11:10.910452  197493 main.go:141] libmachine: Making call to close driver server
	I1019 13:11:10.910468  197493 main.go:141] libmachine: (bridge-422995) Calling .Close
	I1019 13:11:10.910594  197493 api_server.go:72] duration metric: took 1.778498984s to wait for apiserver process to appear ...
	I1019 13:11:10.910619  197493 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:11:10.910645  197493 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I1019 13:11:10.911819  197493 main.go:141] libmachine: (bridge-422995) DBG | Closing plugin on server side
	I1019 13:11:10.911874  197493 main.go:141] libmachine: Successfully made call to close driver server
	I1019 13:11:10.911885  197493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 13:11:10.911894  197493 main.go:141] libmachine: Making call to close driver server
	I1019 13:11:10.911905  197493 main.go:141] libmachine: (bridge-422995) Calling .Close
	I1019 13:11:10.912373  197493 main.go:141] libmachine: Successfully made call to close driver server
	I1019 13:11:10.912399  197493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1019 13:11:10.915198  197493 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1019 13:11:10.917623  197493 addons.go:514] duration metric: took 1.7854927s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1019 13:11:10.929883  197493 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I1019 13:11:10.931104  197493 api_server.go:141] control plane version: v1.34.1
	I1019 13:11:10.931128  197493 api_server.go:131] duration metric: took 20.500661ms to wait for apiserver health ...
	I1019 13:11:10.931138  197493 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:11:10.943919  197493 system_pods.go:59] 8 kube-system pods found
	I1019 13:11:10.943965  197493 system_pods.go:61] "coredns-66bc5c9577-8n4sk" [23ad2ac5-645d-4d40-8070-958d5ed86f1a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:10.943978  197493 system_pods.go:61] "coredns-66bc5c9577-k8s79" [563e59d2-bbfa-470c-b4a1-4bb8c64320ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:10.943999  197493 system_pods.go:61] "etcd-bridge-422995" [9136ef08-cf28-42e0-8b33-c3929e019588] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:11:10.944008  197493 system_pods.go:61] "kube-apiserver-bridge-422995" [099f51c8-03c5-4441-aab9-419c6a23e12a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:11:10.944014  197493 system_pods.go:61] "kube-controller-manager-bridge-422995" [484b2188-da8c-4189-8ce1-fdb6057b1daa] Running
	I1019 13:11:10.944022  197493 system_pods.go:61] "kube-proxy-7t9qz" [fddd42c3-3922-4bbc-bb98-7539329d9b7d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 13:11:10.944032  197493 system_pods.go:61] "kube-scheduler-bridge-422995" [080d7f92-b2c3-40fa-a271-ed4f8de38d34] Running
	I1019 13:11:10.944039  197493 system_pods.go:61] "storage-provisioner" [2d46b33a-1b47-4835-bd57-3df9a7a1058b] Pending
	I1019 13:11:10.944050  197493 system_pods.go:74] duration metric: took 12.903703ms to wait for pod list to return data ...
	I1019 13:11:10.944063  197493 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:11:10.956489  197493 default_sa.go:45] found service account: "default"
	I1019 13:11:10.956514  197493 default_sa.go:55] duration metric: took 12.442315ms for default service account to be created ...
	I1019 13:11:10.956525  197493 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:11:11.039111  197493 system_pods.go:86] 8 kube-system pods found
	I1019 13:11:11.039151  197493 system_pods.go:89] "coredns-66bc5c9577-8n4sk" [23ad2ac5-645d-4d40-8070-958d5ed86f1a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:11.039188  197493 system_pods.go:89] "coredns-66bc5c9577-k8s79" [563e59d2-bbfa-470c-b4a1-4bb8c64320ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:11.039198  197493 system_pods.go:89] "etcd-bridge-422995" [9136ef08-cf28-42e0-8b33-c3929e019588] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:11:11.039223  197493 system_pods.go:89] "kube-apiserver-bridge-422995" [099f51c8-03c5-4441-aab9-419c6a23e12a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:11:11.039231  197493 system_pods.go:89] "kube-controller-manager-bridge-422995" [484b2188-da8c-4189-8ce1-fdb6057b1daa] Running
	I1019 13:11:11.039241  197493 system_pods.go:89] "kube-proxy-7t9qz" [fddd42c3-3922-4bbc-bb98-7539329d9b7d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 13:11:11.039250  197493 system_pods.go:89] "kube-scheduler-bridge-422995" [080d7f92-b2c3-40fa-a271-ed4f8de38d34] Running
	I1019 13:11:11.039259  197493 system_pods.go:89] "storage-provisioner" [2d46b33a-1b47-4835-bd57-3df9a7a1058b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:11:11.039317  197493 retry.go:31] will retry after 275.827887ms: missing components: kube-dns, kube-proxy
	I1019 13:11:11.322392  197493 system_pods.go:86] 8 kube-system pods found
	I1019 13:11:11.322432  197493 system_pods.go:89] "coredns-66bc5c9577-8n4sk" [23ad2ac5-645d-4d40-8070-958d5ed86f1a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:11.322440  197493 system_pods.go:89] "coredns-66bc5c9577-k8s79" [563e59d2-bbfa-470c-b4a1-4bb8c64320ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:11.322446  197493 system_pods.go:89] "etcd-bridge-422995" [9136ef08-cf28-42e0-8b33-c3929e019588] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:11:11.322452  197493 system_pods.go:89] "kube-apiserver-bridge-422995" [099f51c8-03c5-4441-aab9-419c6a23e12a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:11:11.322456  197493 system_pods.go:89] "kube-controller-manager-bridge-422995" [484b2188-da8c-4189-8ce1-fdb6057b1daa] Running
	I1019 13:11:11.322461  197493 system_pods.go:89] "kube-proxy-7t9qz" [fddd42c3-3922-4bbc-bb98-7539329d9b7d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 13:11:11.322464  197493 system_pods.go:89] "kube-scheduler-bridge-422995" [080d7f92-b2c3-40fa-a271-ed4f8de38d34] Running
	I1019 13:11:11.322469  197493 system_pods.go:89] "storage-provisioner" [2d46b33a-1b47-4835-bd57-3df9a7a1058b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:11:11.322489  197493 retry.go:31] will retry after 322.532497ms: missing components: kube-dns, kube-proxy
	I1019 13:11:11.649686  197493 system_pods.go:86] 8 kube-system pods found
	I1019 13:11:11.649721  197493 system_pods.go:89] "coredns-66bc5c9577-8n4sk" [23ad2ac5-645d-4d40-8070-958d5ed86f1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:11.649730  197493 system_pods.go:89] "coredns-66bc5c9577-k8s79" [563e59d2-bbfa-470c-b4a1-4bb8c64320ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:11:11.649738  197493 system_pods.go:89] "etcd-bridge-422995" [9136ef08-cf28-42e0-8b33-c3929e019588] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:11:11.649743  197493 system_pods.go:89] "kube-apiserver-bridge-422995" [099f51c8-03c5-4441-aab9-419c6a23e12a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:11:11.649748  197493 system_pods.go:89] "kube-controller-manager-bridge-422995" [484b2188-da8c-4189-8ce1-fdb6057b1daa] Running
	I1019 13:11:11.649752  197493 system_pods.go:89] "kube-proxy-7t9qz" [fddd42c3-3922-4bbc-bb98-7539329d9b7d] Running
	I1019 13:11:11.649755  197493 system_pods.go:89] "kube-scheduler-bridge-422995" [080d7f92-b2c3-40fa-a271-ed4f8de38d34] Running
	I1019 13:11:11.649759  197493 system_pods.go:89] "storage-provisioner" [2d46b33a-1b47-4835-bd57-3df9a7a1058b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:11:11.649766  197493 system_pods.go:126] duration metric: took 693.235567ms to wait for k8s-apps to be running ...
	I1019 13:11:11.649774  197493 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:11:11.649817  197493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:11:11.667842  197493 system_svc.go:56] duration metric: took 18.055006ms WaitForService to wait for kubelet
	I1019 13:11:11.667872  197493 kubeadm.go:586] duration metric: took 2.535783289s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:11:11.667889  197493 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:11:11.671369  197493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 13:11:11.671406  197493 node_conditions.go:123] node cpu capacity is 2
	I1019 13:11:11.671421  197493 node_conditions.go:105] duration metric: took 3.526063ms to run NodePressure ...
	I1019 13:11:11.671437  197493 start.go:241] waiting for startup goroutines ...
	I1019 13:11:11.671451  197493 start.go:246] waiting for cluster config update ...
	I1019 13:11:11.671468  197493 start.go:255] writing updated cluster config ...
	I1019 13:11:11.671718  197493 ssh_runner.go:195] Run: rm -f paused
	I1019 13:11:11.677849  197493 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:11:11.681429  197493 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8n4sk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:11:10.543698  199330 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1019 13:11:10.543874  199330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 13:11:10.543934  199330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 13:11:10.557983  199330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34983
	I1019 13:11:10.558688  199330 main.go:141] libmachine: () Calling .GetVersion
	I1019 13:11:10.559444  199330 main.go:141] libmachine: Using API Version  1
	I1019 13:11:10.559472  199330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 13:11:10.559883  199330 main.go:141] libmachine: () Calling .GetMachineName
	I1019 13:11:10.560098  199330 main.go:141] libmachine: (old-k8s-version-725412) Calling .GetMachineName
	I1019 13:11:10.560327  199330 main.go:141] libmachine: (old-k8s-version-725412) Calling .DriverName
	I1019 13:11:10.560536  199330 start.go:159] libmachine.API.Create for "old-k8s-version-725412" (driver="kvm2")
	I1019 13:11:10.560572  199330 client.go:168] LocalClient.Create starting
	I1019 13:11:10.560607  199330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-144655/.minikube/certs/ca.pem
	I1019 13:11:10.560647  199330 main.go:141] libmachine: Decoding PEM data...
	I1019 13:11:10.560667  199330 main.go:141] libmachine: Parsing certificate...
	I1019 13:11:10.560732  199330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-144655/.minikube/certs/cert.pem
	I1019 13:11:10.560763  199330 main.go:141] libmachine: Decoding PEM data...
	I1019 13:11:10.560775  199330 main.go:141] libmachine: Parsing certificate...
	I1019 13:11:10.560818  199330 main.go:141] libmachine: Running pre-create checks...
	I1019 13:11:10.560832  199330 main.go:141] libmachine: (old-k8s-version-725412) Calling .PreCreateCheck
	I1019 13:11:10.561342  199330 main.go:141] libmachine: (old-k8s-version-725412) Calling .GetConfigRaw
	I1019 13:11:10.561807  199330 main.go:141] libmachine: Creating machine...
	I1019 13:11:10.561823  199330 main.go:141] libmachine: (old-k8s-version-725412) Calling .Create
	I1019 13:11:10.562021  199330 main.go:141] libmachine: (old-k8s-version-725412) creating domain...
	I1019 13:11:10.562047  199330 main.go:141] libmachine: (old-k8s-version-725412) creating network...
	I1019 13:11:10.563670  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | found existing default network
	I1019 13:11:10.563861  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | <network connections='3'>
	I1019 13:11:10.563883  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <name>default</name>
	I1019 13:11:10.563908  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1019 13:11:10.563922  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <forward mode='nat'>
	I1019 13:11:10.563934  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <nat>
	I1019 13:11:10.563947  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <port start='1024' end='65535'/>
	I1019 13:11:10.563960  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </nat>
	I1019 13:11:10.563970  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   </forward>
	I1019 13:11:10.563981  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1019 13:11:10.563993  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1019 13:11:10.564001  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1019 13:11:10.564007  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <dhcp>
	I1019 13:11:10.564060  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1019 13:11:10.564093  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </dhcp>
	I1019 13:11:10.564116  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   </ip>
	I1019 13:11:10.564123  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | </network>
	I1019 13:11:10.564134  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | 
	I1019 13:11:10.565064  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:10.564793  199357 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:77:44} reservation:<nil>}
	I1019 13:11:10.565651  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:10.565555  199357 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:5f:69} reservation:<nil>}
	I1019 13:11:10.566445  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:10.566335  199357 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000322380}
	I1019 13:11:10.566475  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | defining private network:
	I1019 13:11:10.566494  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | 
	I1019 13:11:10.566507  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | <network>
	I1019 13:11:10.566520  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <name>mk-old-k8s-version-725412</name>
	I1019 13:11:10.566533  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <dns enable='no'/>
	I1019 13:11:10.566552  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1019 13:11:10.566570  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <dhcp>
	I1019 13:11:10.566583  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1019 13:11:10.566600  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </dhcp>
	I1019 13:11:10.566629  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   </ip>
	I1019 13:11:10.566648  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | </network>
	I1019 13:11:10.566672  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | 
	I1019 13:11:10.572145  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | creating private network mk-old-k8s-version-725412 192.168.61.0/24...
	I1019 13:11:10.658804  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | private network mk-old-k8s-version-725412 192.168.61.0/24 created
	I1019 13:11:10.659114  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | <network>
	I1019 13:11:10.659143  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <name>mk-old-k8s-version-725412</name>
	I1019 13:11:10.659163  199330 main.go:141] libmachine: (old-k8s-version-725412) setting up store path in /home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412 ...
	I1019 13:11:10.659183  199330 main.go:141] libmachine: (old-k8s-version-725412) building disk image from file:///home/jenkins/minikube-integration/21772-144655/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1019 13:11:10.659216  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <uuid>114de51a-016b-4ad2-b145-6c073be5dced</uuid>
	I1019 13:11:10.659237  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I1019 13:11:10.659248  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <mac address='52:54:00:c6:7f:c9'/>
	I1019 13:11:10.659256  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <dns enable='no'/>
	I1019 13:11:10.659267  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1019 13:11:10.659274  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <dhcp>
	I1019 13:11:10.659302  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1019 13:11:10.659315  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </dhcp>
	I1019 13:11:10.659324  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   </ip>
	I1019 13:11:10.659331  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | </network>
	I1019 13:11:10.659344  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | 
	I1019 13:11:10.659361  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:10.659112  199357 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 13:11:10.660111  199330 main.go:141] libmachine: (old-k8s-version-725412) Downloading /home/jenkins/minikube-integration/21772-144655/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21772-144655/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1019 13:11:10.969109  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:10.968961  199357 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412/id_rsa...
	I1019 13:11:11.201358  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:11.201220  199357 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412/old-k8s-version-725412.rawdisk...
	I1019 13:11:11.201383  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | Writing magic tar header
	I1019 13:11:11.201412  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | Writing SSH key tar header
	I1019 13:11:11.201439  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:11.201365  199357 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412 ...
	I1019 13:11:11.201493  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412
	I1019 13:11:11.201571  199330 main.go:141] libmachine: (old-k8s-version-725412) setting executable bit set on /home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412 (perms=drwx------)
	I1019 13:11:11.201613  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655/.minikube/machines
	I1019 13:11:11.201629  199330 main.go:141] libmachine: (old-k8s-version-725412) setting executable bit set on /home/jenkins/minikube-integration/21772-144655/.minikube/machines (perms=drwxr-xr-x)
	I1019 13:11:11.201646  199330 main.go:141] libmachine: (old-k8s-version-725412) setting executable bit set on /home/jenkins/minikube-integration/21772-144655/.minikube (perms=drwxr-xr-x)
	I1019 13:11:11.201659  199330 main.go:141] libmachine: (old-k8s-version-725412) setting executable bit set on /home/jenkins/minikube-integration/21772-144655 (perms=drwxrwxr-x)
	I1019 13:11:11.201672  199330 main.go:141] libmachine: (old-k8s-version-725412) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1019 13:11:11.201687  199330 main.go:141] libmachine: (old-k8s-version-725412) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1019 13:11:11.201696  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 13:11:11.201709  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21772-144655
	I1019 13:11:11.201721  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1019 13:11:11.201734  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | checking permissions on dir: /home/jenkins
	I1019 13:11:11.201744  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | checking permissions on dir: /home
	I1019 13:11:11.201756  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | skipping /home - not owner
	I1019 13:11:11.201786  199330 main.go:141] libmachine: (old-k8s-version-725412) defining domain...
	I1019 13:11:11.203003  199330 main.go:141] libmachine: (old-k8s-version-725412) defining domain using XML: 
	I1019 13:11:11.203040  199330 main.go:141] libmachine: (old-k8s-version-725412) <domain type='kvm'>
	I1019 13:11:11.203067  199330 main.go:141] libmachine: (old-k8s-version-725412)   <name>old-k8s-version-725412</name>
	I1019 13:11:11.203086  199330 main.go:141] libmachine: (old-k8s-version-725412)   <memory unit='MiB'>3072</memory>
	I1019 13:11:11.203111  199330 main.go:141] libmachine: (old-k8s-version-725412)   <vcpu>2</vcpu>
	I1019 13:11:11.203128  199330 main.go:141] libmachine: (old-k8s-version-725412)   <features>
	I1019 13:11:11.203135  199330 main.go:141] libmachine: (old-k8s-version-725412)     <acpi/>
	I1019 13:11:11.203144  199330 main.go:141] libmachine: (old-k8s-version-725412)     <apic/>
	I1019 13:11:11.203153  199330 main.go:141] libmachine: (old-k8s-version-725412)     <pae/>
	I1019 13:11:11.203163  199330 main.go:141] libmachine: (old-k8s-version-725412)   </features>
	I1019 13:11:11.203173  199330 main.go:141] libmachine: (old-k8s-version-725412)   <cpu mode='host-passthrough'>
	I1019 13:11:11.203183  199330 main.go:141] libmachine: (old-k8s-version-725412)   </cpu>
	I1019 13:11:11.203190  199330 main.go:141] libmachine: (old-k8s-version-725412)   <os>
	I1019 13:11:11.203197  199330 main.go:141] libmachine: (old-k8s-version-725412)     <type>hvm</type>
	I1019 13:11:11.203202  199330 main.go:141] libmachine: (old-k8s-version-725412)     <boot dev='cdrom'/>
	I1019 13:11:11.203206  199330 main.go:141] libmachine: (old-k8s-version-725412)     <boot dev='hd'/>
	I1019 13:11:11.203211  199330 main.go:141] libmachine: (old-k8s-version-725412)     <bootmenu enable='no'/>
	I1019 13:11:11.203227  199330 main.go:141] libmachine: (old-k8s-version-725412)   </os>
	I1019 13:11:11.203235  199330 main.go:141] libmachine: (old-k8s-version-725412)   <devices>
	I1019 13:11:11.203242  199330 main.go:141] libmachine: (old-k8s-version-725412)     <disk type='file' device='cdrom'>
	I1019 13:11:11.203260  199330 main.go:141] libmachine: (old-k8s-version-725412)       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412/boot2docker.iso'/>
	I1019 13:11:11.203271  199330 main.go:141] libmachine: (old-k8s-version-725412)       <target dev='hdc' bus='scsi'/>
	I1019 13:11:11.203300  199330 main.go:141] libmachine: (old-k8s-version-725412)       <readonly/>
	I1019 13:11:11.203316  199330 main.go:141] libmachine: (old-k8s-version-725412)     </disk>
	I1019 13:11:11.203327  199330 main.go:141] libmachine: (old-k8s-version-725412)     <disk type='file' device='disk'>
	I1019 13:11:11.203351  199330 main.go:141] libmachine: (old-k8s-version-725412)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1019 13:11:11.203368  199330 main.go:141] libmachine: (old-k8s-version-725412)       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412/old-k8s-version-725412.rawdisk'/>
	I1019 13:11:11.203377  199330 main.go:141] libmachine: (old-k8s-version-725412)       <target dev='hda' bus='virtio'/>
	I1019 13:11:11.203382  199330 main.go:141] libmachine: (old-k8s-version-725412)     </disk>
	I1019 13:11:11.203388  199330 main.go:141] libmachine: (old-k8s-version-725412)     <interface type='network'>
	I1019 13:11:11.203407  199330 main.go:141] libmachine: (old-k8s-version-725412)       <source network='mk-old-k8s-version-725412'/>
	I1019 13:11:11.203427  199330 main.go:141] libmachine: (old-k8s-version-725412)       <model type='virtio'/>
	I1019 13:11:11.203440  199330 main.go:141] libmachine: (old-k8s-version-725412)     </interface>
	I1019 13:11:11.203449  199330 main.go:141] libmachine: (old-k8s-version-725412)     <interface type='network'>
	I1019 13:11:11.203455  199330 main.go:141] libmachine: (old-k8s-version-725412)       <source network='default'/>
	I1019 13:11:11.203462  199330 main.go:141] libmachine: (old-k8s-version-725412)       <model type='virtio'/>
	I1019 13:11:11.203468  199330 main.go:141] libmachine: (old-k8s-version-725412)     </interface>
	I1019 13:11:11.203475  199330 main.go:141] libmachine: (old-k8s-version-725412)     <serial type='pty'>
	I1019 13:11:11.203480  199330 main.go:141] libmachine: (old-k8s-version-725412)       <target port='0'/>
	I1019 13:11:11.203487  199330 main.go:141] libmachine: (old-k8s-version-725412)     </serial>
	I1019 13:11:11.203492  199330 main.go:141] libmachine: (old-k8s-version-725412)     <console type='pty'>
	I1019 13:11:11.203500  199330 main.go:141] libmachine: (old-k8s-version-725412)       <target type='serial' port='0'/>
	I1019 13:11:11.203505  199330 main.go:141] libmachine: (old-k8s-version-725412)     </console>
	I1019 13:11:11.203512  199330 main.go:141] libmachine: (old-k8s-version-725412)     <rng model='virtio'>
	I1019 13:11:11.203518  199330 main.go:141] libmachine: (old-k8s-version-725412)       <backend model='random'>/dev/random</backend>
	I1019 13:11:11.203534  199330 main.go:141] libmachine: (old-k8s-version-725412)     </rng>
	I1019 13:11:11.203542  199330 main.go:141] libmachine: (old-k8s-version-725412)   </devices>
	I1019 13:11:11.203546  199330 main.go:141] libmachine: (old-k8s-version-725412) </domain>
	I1019 13:11:11.203555  199330 main.go:141] libmachine: (old-k8s-version-725412) 
	I1019 13:11:11.208197  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:da:80:71 in network default
	I1019 13:11:11.208780  199330 main.go:141] libmachine: (old-k8s-version-725412) starting domain...
	I1019 13:11:11.208803  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:11.208809  199330 main.go:141] libmachine: (old-k8s-version-725412) ensuring networks are active...
	I1019 13:11:11.209513  199330 main.go:141] libmachine: (old-k8s-version-725412) Ensuring network default is active
	I1019 13:11:11.209875  199330 main.go:141] libmachine: (old-k8s-version-725412) Ensuring network mk-old-k8s-version-725412 is active
	I1019 13:11:11.210449  199330 main.go:141] libmachine: (old-k8s-version-725412) getting domain XML...
	I1019 13:11:11.211581  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | starting domain XML:
	I1019 13:11:11.211606  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | <domain type='kvm'>
	I1019 13:11:11.211619  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <name>old-k8s-version-725412</name>
	I1019 13:11:11.211630  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <uuid>72d8e2d5-87b6-416d-ba55-c93cfff0950e</uuid>
	I1019 13:11:11.211646  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <memory unit='KiB'>3145728</memory>
	I1019 13:11:11.211655  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1019 13:11:11.211665  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <vcpu placement='static'>2</vcpu>
	I1019 13:11:11.211676  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <os>
	I1019 13:11:11.211699  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1019 13:11:11.211721  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <boot dev='cdrom'/>
	I1019 13:11:11.211730  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <boot dev='hd'/>
	I1019 13:11:11.211743  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <bootmenu enable='no'/>
	I1019 13:11:11.211752  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   </os>
	I1019 13:11:11.211761  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <features>
	I1019 13:11:11.211770  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <acpi/>
	I1019 13:11:11.211779  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <apic/>
	I1019 13:11:11.211796  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <pae/>
	I1019 13:11:11.211820  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   </features>
	I1019 13:11:11.211837  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1019 13:11:11.211846  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <clock offset='utc'/>
	I1019 13:11:11.211861  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <on_poweroff>destroy</on_poweroff>
	I1019 13:11:11.211871  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <on_reboot>restart</on_reboot>
	I1019 13:11:11.211882  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <on_crash>destroy</on_crash>
	I1019 13:11:11.211890  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   <devices>
	I1019 13:11:11.211903  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1019 13:11:11.211917  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <disk type='file' device='cdrom'>
	I1019 13:11:11.211934  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <driver name='qemu' type='raw'/>
	I1019 13:11:11.211956  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412/boot2docker.iso'/>
	I1019 13:11:11.211972  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <target dev='hdc' bus='scsi'/>
	I1019 13:11:11.211991  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <readonly/>
	I1019 13:11:11.212005  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1019 13:11:11.212011  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </disk>
	I1019 13:11:11.212036  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <disk type='file' device='disk'>
	I1019 13:11:11.212057  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1019 13:11:11.212085  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <source file='/home/jenkins/minikube-integration/21772-144655/.minikube/machines/old-k8s-version-725412/old-k8s-version-725412.rawdisk'/>
	I1019 13:11:11.212097  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <target dev='hda' bus='virtio'/>
	I1019 13:11:11.212109  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1019 13:11:11.212120  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </disk>
	I1019 13:11:11.212130  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1019 13:11:11.212140  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1019 13:11:11.212155  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </controller>
	I1019 13:11:11.212173  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1019 13:11:11.212182  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1019 13:11:11.212192  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1019 13:11:11.212219  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </controller>
	I1019 13:11:11.212241  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <interface type='network'>
	I1019 13:11:11.212254  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <mac address='52:54:00:fc:05:d2'/>
	I1019 13:11:11.212267  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <source network='mk-old-k8s-version-725412'/>
	I1019 13:11:11.212296  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <model type='virtio'/>
	I1019 13:11:11.212315  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1019 13:11:11.212326  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </interface>
	I1019 13:11:11.212332  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <interface type='network'>
	I1019 13:11:11.212346  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <mac address='52:54:00:da:80:71'/>
	I1019 13:11:11.212358  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <source network='default'/>
	I1019 13:11:11.212371  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <model type='virtio'/>
	I1019 13:11:11.212391  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1019 13:11:11.212406  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </interface>
	I1019 13:11:11.212435  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <serial type='pty'>
	I1019 13:11:11.212448  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <target type='isa-serial' port='0'>
	I1019 13:11:11.212457  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |         <model name='isa-serial'/>
	I1019 13:11:11.212465  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       </target>
	I1019 13:11:11.212486  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </serial>
	I1019 13:11:11.212509  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <console type='pty'>
	I1019 13:11:11.212525  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <target type='serial' port='0'/>
	I1019 13:11:11.212548  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </console>
	I1019 13:11:11.212569  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <input type='mouse' bus='ps2'/>
	I1019 13:11:11.212583  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <input type='keyboard' bus='ps2'/>
	I1019 13:11:11.212594  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <audio id='1' type='none'/>
	I1019 13:11:11.212607  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <memballoon model='virtio'>
	I1019 13:11:11.212619  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1019 13:11:11.212630  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </memballoon>
	I1019 13:11:11.212642  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     <rng model='virtio'>
	I1019 13:11:11.212659  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <backend model='random'>/dev/random</backend>
	I1019 13:11:11.212675  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1019 13:11:11.212687  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |     </rng>
	I1019 13:11:11.212694  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG |   </devices>
	I1019 13:11:11.212704  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | </domain>
	I1019 13:11:11.212710  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | 
	I1019 13:11:12.535720  199330 main.go:141] libmachine: (old-k8s-version-725412) waiting for domain to start...
	I1019 13:11:12.537366  199330 main.go:141] libmachine: (old-k8s-version-725412) domain is now running
	I1019 13:11:12.537390  199330 main.go:141] libmachine: (old-k8s-version-725412) waiting for IP...
	I1019 13:11:12.538403  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:12.539215  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | no network interface addresses found for domain old-k8s-version-725412 (source=lease)
	I1019 13:11:12.539241  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | trying to list again with source=arp
	I1019 13:11:12.539628  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | unable to find current IP address of domain old-k8s-version-725412 in network mk-old-k8s-version-725412 (interfaces detected: [])
	I1019 13:11:12.539747  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:12.539664  199357 retry.go:31] will retry after 224.800382ms: waiting for domain to come up
	I1019 13:11:12.766213  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:12.767013  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | no network interface addresses found for domain old-k8s-version-725412 (source=lease)
	I1019 13:11:12.767041  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | trying to list again with source=arp
	I1019 13:11:12.767430  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | unable to find current IP address of domain old-k8s-version-725412 in network mk-old-k8s-version-725412 (interfaces detected: [])
	I1019 13:11:12.767462  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:12.767389  199357 retry.go:31] will retry after 249.167123ms: waiting for domain to come up
	I1019 13:11:13.017862  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:13.018658  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | no network interface addresses found for domain old-k8s-version-725412 (source=lease)
	I1019 13:11:13.018686  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | trying to list again with source=arp
	I1019 13:11:13.019099  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | unable to find current IP address of domain old-k8s-version-725412 in network mk-old-k8s-version-725412 (interfaces detected: [])
	I1019 13:11:13.019127  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:13.019069  199357 retry.go:31] will retry after 399.30721ms: waiting for domain to come up
	I1019 13:11:13.420617  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:13.421418  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | no network interface addresses found for domain old-k8s-version-725412 (source=lease)
	I1019 13:11:13.421440  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | trying to list again with source=arp
	I1019 13:11:13.421815  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | unable to find current IP address of domain old-k8s-version-725412 in network mk-old-k8s-version-725412 (interfaces detected: [])
	I1019 13:11:13.421865  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:13.421797  199357 retry.go:31] will retry after 520.355471ms: waiting for domain to come up
	I1019 13:11:13.943548  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:13.944336  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | no network interface addresses found for domain old-k8s-version-725412 (source=lease)
	I1019 13:11:13.944358  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | trying to list again with source=arp
	I1019 13:11:13.944711  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | unable to find current IP address of domain old-k8s-version-725412 in network mk-old-k8s-version-725412 (interfaces detected: [])
	I1019 13:11:13.944741  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:13.944688  199357 retry.go:31] will retry after 658.783419ms: waiting for domain to come up
	I1019 13:11:14.605676  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:14.606317  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | no network interface addresses found for domain old-k8s-version-725412 (source=lease)
	I1019 13:11:14.606370  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | trying to list again with source=arp
	I1019 13:11:14.606734  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | unable to find current IP address of domain old-k8s-version-725412 in network mk-old-k8s-version-725412 (interfaces detected: [])
	I1019 13:11:14.606852  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:14.606739  199357 retry.go:31] will retry after 780.221203ms: waiting for domain to come up
	I1019 13:11:15.388653  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | domain old-k8s-version-725412 has defined MAC address 52:54:00:fc:05:d2 in network mk-old-k8s-version-725412
	I1019 13:11:15.389513  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | no network interface addresses found for domain old-k8s-version-725412 (source=lease)
	I1019 13:11:15.389544  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | trying to list again with source=arp
	I1019 13:11:15.389931  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | unable to find current IP address of domain old-k8s-version-725412 in network mk-old-k8s-version-725412 (interfaces detected: [])
	I1019 13:11:15.389958  199330 main.go:141] libmachine: (old-k8s-version-725412) DBG | I1019 13:11:15.389911  199357 retry.go:31] will retry after 945.772395ms: waiting for domain to come up
	W1019 13:11:12.915034  187539 pod_ready.go:104] pod "kube-scheduler-pause-969331" is not "Ready", error: <nil>
	W1019 13:11:14.915369  187539 pod_ready.go:104] pod "kube-scheduler-pause-969331" is not "Ready", error: <nil>
	W1019 13:11:13.689192  197493 pod_ready.go:104] pod "coredns-66bc5c9577-8n4sk" is not "Ready", error: <nil>
	W1019 13:11:16.189234  197493 pod_ready.go:104] pod "coredns-66bc5c9577-8n4sk" is not "Ready", error: <nil>
	W1019 13:11:17.415707  187539 pod_ready.go:104] pod "kube-scheduler-pause-969331" is not "Ready", error: <nil>
	W1019 13:11:19.507978  187539 pod_ready.go:104] pod "kube-scheduler-pause-969331" is not "Ready", error: <nil>
	I1019 13:11:19.988352  187539 pod_ready.go:86] duration metric: took 3m50.079418687s for pod "kube-scheduler-pause-969331" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 13:11:19.988396  187539 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1019 13:11:19.988424  187539 pod_ready.go:40] duration metric: took 4m0.000764129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:11:19.990438  187539 out.go:203] 
	W1019 13:11:19.991523  187539 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1019 13:11:19.992519  187539 out.go:203] 
	
	
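	The repeated "will retry after 224.800382ms / 249.167123ms / 399.30721ms ..." lines from the KVM driver above are a poll-with-growing-delay loop: the driver keeps asking libvirt for the domain's interface addresses (first from the DHCP lease, then via ARP) and sleeps an increasing, jittered interval between attempts until the domain reports an IP or the wait times out. Below is a minimal, hedged Go sketch of that pattern; it is not minikube's actual retry package, and the probe function, delays, and names are illustrative assumptions only.

	// retry_sketch.go - illustrative only; not minikube's pkg/util/retry.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it succeeds or timeout elapses, sleeping a
	// growing, jittered delay between attempts (roughly matching the intervals
	// seen in the log above).
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		base := 200 * time.Millisecond
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for domain to come up")
			}
			// Add jitter and grow the base delay a little each round.
			sleep := base + time.Duration(rand.Int63n(int64(base/2)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			base += base / 3
		}
	}

	func main() {
		attempts := 0
		// Hypothetical probe standing in for the libvirt interface-address lookup.
		probe := func() error {
			attempts++
			if attempts < 5 {
				return errors.New("no network interface addresses found")
			}
			return nil
		}
		if err := waitFor(probe, 30*time.Second); err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("domain is up")
	}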
	==> CRI-O <==
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.718546986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760879480718522353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34148a08-d742-498c-80f4-2b7232a5c226 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.719341218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=602c9579-51fb-4d12-bfb2-7ffae4fb2b4b name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.719443987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=602c9579-51fb-4d12-bfb2-7ffae4fb2b4b name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.719767404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0742d791fd1b94132b804df7328815e9094a0e41c8c31aafc8d4a615f004538,PodSandboxId:6507d2431c621b4aaa5e71e690121c06a801d1663aaf9a3578861472db8480a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760879238691328919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fbc9bd04490b95cfcd6a361cd29ee1206ab1ba0e61a4f89dbb1328a8c39bf4,PodSandboxId:d2ae110fd76301b33644d8e70b876b0af97de3765b15148deed85521a7b1a0ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760879238500236299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5a9e1a9ac3f7b614d51f28ce6a2fa6b67acff73bf8f5720f4be54b9093e5ad,PodSandboxId:9818c5b3513acdcfbf98b777deefb5b17ac9b132e12518a39f3820660326071b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760879235041980738,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e657f7b629ca5d30e4c69e4495631626e678877985c8653e8ea927771ca751bc,PodSandboxId:981d189f901228d3467c1005fd97c35f79b257c3f5edcb6f1b29856576ee7499,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760879234832722828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e27df0f866c7f3b7961d960e6d2263a8501f354ffa69bb136c33db5675574e,PodSandboxId:0c1d6b90e830a24285d6d9404c9dd1732f98b07525299ad007a13d621365e3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760879234809275716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deda0604a3881aa38c018190951670acd23bad65ab7094a72efb375d7e639be1,PodSandboxId:7a46cc6ba8c3430520f329079930fad973530dc8fe3214a2546779d5cee1b6be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367c
c9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760879138817949847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49c8da32752ce8e7e7393d89714ed081e91555e7802492d22a65660214b818,PodSandboxId:e0d80d7d1c33eff8e52243f7682223b005a82e0e5938a5d63e7a8a1715b0785f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760879138107763940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39b4c3bb62fe2128157d993c07111b9ae8927d89c6084d00a898bd3337ea481,PodSandboxId:b663e6957915a911bec06e79d717fe630ca628aedd04df73c81449c2e83089cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760879137965609918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc4b2cebaf6604f4aa3db750ccb0ee244435bfc9f3423405c92ff4efcb85678,PodSandboxId:4489cd98ec70508828593f2a3e458003ebbcf668c1dbe68d54d788624c7c625b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760879137974870644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,}
,Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c17deb73ba4849d3a92a56b2679a6e4acd0cc221356c77c3b99be352245a13f,PodSandboxId:a5c64ab5021e6ace5a214fb83abccd3347a369f3a6949e3745c000002edab93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760879137736172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1,PodSandboxId:3251d2ccbc8b16c3dd841a518e0a3421d2f133715d287c8b60c77bcc7331aab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760879055142662374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-969331,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: f23032c31e306cec4ba2533075186454,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=602c9579-51fb-4d12-bfb2-7ffae4fb2b4b name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.761668021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69044e98-3b42-4729-8466-5c1022d40575 name=/runtime.v1.RuntimeService/Version
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.761754044Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69044e98-3b42-4729-8466-5c1022d40575 name=/runtime.v1.RuntimeService/Version
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.763015726Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=478fb70b-c6e5-4919-8a2d-de232c125f17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.763462874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760879480763442232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=478fb70b-c6e5-4919-8a2d-de232c125f17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.764039713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f27377b-e252-498b-b049-9cc2d4bae04e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.764094266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f27377b-e252-498b-b049-9cc2d4bae04e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.764888194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0742d791fd1b94132b804df7328815e9094a0e41c8c31aafc8d4a615f004538,PodSandboxId:6507d2431c621b4aaa5e71e690121c06a801d1663aaf9a3578861472db8480a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760879238691328919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fbc9bd04490b95cfcd6a361cd29ee1206ab1ba0e61a4f89dbb1328a8c39bf4,PodSandboxId:d2ae110fd76301b33644d8e70b876b0af97de3765b15148deed85521a7b1a0ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760879238500236299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5a9e1a9ac3f7b614d51f28ce6a2fa6b67acff73bf8f5720f4be54b9093e5ad,PodSandboxId:9818c5b3513acdcfbf98b777deefb5b17ac9b132e12518a39f3820660326071b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760879235041980738,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e657f7b629ca5d30e4c69e4495631626e678877985c8653e8ea927771ca751bc,PodSandboxId:981d189f901228d3467c1005fd97c35f79b257c3f5edcb6f1b29856576ee7499,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760879234832722828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e27df0f866c7f3b7961d960e6d2263a8501f354ffa69bb136c33db5675574e,PodSandboxId:0c1d6b90e830a24285d6d9404c9dd1732f98b07525299ad007a13d621365e3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760879234809275716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deda0604a3881aa38c018190951670acd23bad65ab7094a72efb375d7e639be1,PodSandboxId:7a46cc6ba8c3430520f329079930fad973530dc8fe3214a2546779d5cee1b6be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367c
c9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760879138817949847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49c8da32752ce8e7e7393d89714ed081e91555e7802492d22a65660214b818,PodSandboxId:e0d80d7d1c33eff8e52243f7682223b005a82e0e5938a5d63e7a8a1715b0785f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760879138107763940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39b4c3bb62fe2128157d993c07111b9ae8927d89c6084d00a898bd3337ea481,PodSandboxId:b663e6957915a911bec06e79d717fe630ca628aedd04df73c81449c2e83089cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760879137965609918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc4b2cebaf6604f4aa3db750ccb0ee244435bfc9f3423405c92ff4efcb85678,PodSandboxId:4489cd98ec70508828593f2a3e458003ebbcf668c1dbe68d54d788624c7c625b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760879137974870644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,}
,Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c17deb73ba4849d3a92a56b2679a6e4acd0cc221356c77c3b99be352245a13f,PodSandboxId:a5c64ab5021e6ace5a214fb83abccd3347a369f3a6949e3745c000002edab93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760879137736172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1,PodSandboxId:3251d2ccbc8b16c3dd841a518e0a3421d2f133715d287c8b60c77bcc7331aab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760879055142662374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-969331,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: f23032c31e306cec4ba2533075186454,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f27377b-e252-498b-b049-9cc2d4bae04e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.824450089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=712e3cc3-d997-4ef8-b1c5-6656e1a8f7f4 name=/runtime.v1.RuntimeService/Version
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.825195608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=712e3cc3-d997-4ef8-b1c5-6656e1a8f7f4 name=/runtime.v1.RuntimeService/Version
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.827764123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=041036f8-107d-49ef-810a-08338ee2ad7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.828848837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760879480828814888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=041036f8-107d-49ef-810a-08338ee2ad7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.830081326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7db61d8f-e5c2-4b61-8512-8923c6adb517 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.830163902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7db61d8f-e5c2-4b61-8512-8923c6adb517 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.830498676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0742d791fd1b94132b804df7328815e9094a0e41c8c31aafc8d4a615f004538,PodSandboxId:6507d2431c621b4aaa5e71e690121c06a801d1663aaf9a3578861472db8480a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760879238691328919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fbc9bd04490b95cfcd6a361cd29ee1206ab1ba0e61a4f89dbb1328a8c39bf4,PodSandboxId:d2ae110fd76301b33644d8e70b876b0af97de3765b15148deed85521a7b1a0ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760879238500236299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5a9e1a9ac3f7b614d51f28ce6a2fa6b67acff73bf8f5720f4be54b9093e5ad,PodSandboxId:9818c5b3513acdcfbf98b777deefb5b17ac9b132e12518a39f3820660326071b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760879235041980738,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e657f7b629ca5d30e4c69e4495631626e678877985c8653e8ea927771ca751bc,PodSandboxId:981d189f901228d3467c1005fd97c35f79b257c3f5edcb6f1b29856576ee7499,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760879234832722828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e27df0f866c7f3b7961d960e6d2263a8501f354ffa69bb136c33db5675574e,PodSandboxId:0c1d6b90e830a24285d6d9404c9dd1732f98b07525299ad007a13d621365e3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760879234809275716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deda0604a3881aa38c018190951670acd23bad65ab7094a72efb375d7e639be1,PodSandboxId:7a46cc6ba8c3430520f329079930fad973530dc8fe3214a2546779d5cee1b6be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367c
c9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760879138817949847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49c8da32752ce8e7e7393d89714ed081e91555e7802492d22a65660214b818,PodSandboxId:e0d80d7d1c33eff8e52243f7682223b005a82e0e5938a5d63e7a8a1715b0785f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760879138107763940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39b4c3bb62fe2128157d993c07111b9ae8927d89c6084d00a898bd3337ea481,PodSandboxId:b663e6957915a911bec06e79d717fe630ca628aedd04df73c81449c2e83089cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760879137965609918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc4b2cebaf6604f4aa3db750ccb0ee244435bfc9f3423405c92ff4efcb85678,PodSandboxId:4489cd98ec70508828593f2a3e458003ebbcf668c1dbe68d54d788624c7c625b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760879137974870644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,}
,Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c17deb73ba4849d3a92a56b2679a6e4acd0cc221356c77c3b99be352245a13f,PodSandboxId:a5c64ab5021e6ace5a214fb83abccd3347a369f3a6949e3745c000002edab93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760879137736172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1,PodSandboxId:3251d2ccbc8b16c3dd841a518e0a3421d2f133715d287c8b60c77bcc7331aab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760879055142662374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-969331,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: f23032c31e306cec4ba2533075186454,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7db61d8f-e5c2-4b61-8512-8923c6adb517 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.878295924Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c5d9f40-ebc8-43aa-a961-bf29b9b982d1 name=/runtime.v1.RuntimeService/Version
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.878614905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c5d9f40-ebc8-43aa-a961-bf29b9b982d1 name=/runtime.v1.RuntimeService/Version
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.880132682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe6b7cd3-02f4-4f77-b84a-44135bf24f75 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.880932466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760879480880876541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe6b7cd3-02f4-4f77-b84a-44135bf24f75 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.881540092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42a660be-4422-4997-af62-a3ba3d1565df name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.881715575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42a660be-4422-4997-af62-a3ba3d1565df name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 13:11:20 pause-969331 crio[3465]: time="2025-10-19 13:11:20.882236321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0742d791fd1b94132b804df7328815e9094a0e41c8c31aafc8d4a615f004538,PodSandboxId:6507d2431c621b4aaa5e71e690121c06a801d1663aaf9a3578861472db8480a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760879238691328919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fbc9bd04490b95cfcd6a361cd29ee1206ab1ba0e61a4f89dbb1328a8c39bf4,PodSandboxId:d2ae110fd76301b33644d8e70b876b0af97de3765b15148deed85521a7b1a0ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760879238500236299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5a9e1a9ac3f7b614d51f28ce6a2fa6b67acff73bf8f5720f4be54b9093e5ad,PodSandboxId:9818c5b3513acdcfbf98b777deefb5b17ac9b132e12518a39f3820660326071b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760879235041980738,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e657f7b629ca5d30e4c69e4495631626e678877985c8653e8ea927771ca751bc,PodSandboxId:981d189f901228d3467c1005fd97c35f79b257c3f5edcb6f1b29856576ee7499,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760879234832722828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e27df0f866c7f3b7961d960e6d2263a8501f354ffa69bb136c33db5675574e,PodSandboxId:0c1d6b90e830a24285d6d9404c9dd1732f98b07525299ad007a13d621365e3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760879234809275716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deda0604a3881aa38c018190951670acd23bad65ab7094a72efb375d7e639be1,PodSandboxId:7a46cc6ba8c3430520f329079930fad973530dc8fe3214a2546779d5cee1b6be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367c
c9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760879138817949847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mz52t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4f2acb3-4b08-47de-9c4a-13a1702fdc26,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49c8da32752ce8e7e7393d89714ed081e91555e7802492d22a65660214b818,PodSandboxId:e0d80d7d1c33eff8e52243f7682223b005a82e0e5938a5d63e7a8a1715b0785f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760879138107763940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe28a219ba25e236fd3e2cb4bcb9abbd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39b4c3bb62fe2128157d993c07111b9ae8927d89c6084d00a898bd3337ea481,PodSandboxId:b663e6957915a911bec06e79d717fe630ca628aedd04df73c81449c2e83089cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760879137965609918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60e1fe3cf1691827338ea468bb9d85a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc4b2cebaf6604f4aa3db750ccb0ee244435bfc9f3423405c92ff4efcb85678,PodSandboxId:4489cd98ec70508828593f2a3e458003ebbcf668c1dbe68d54d788624c7c625b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760879137974870644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-969331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71bb7d57dca0d489a37a8b22b2830f1b,}
,Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c17deb73ba4849d3a92a56b2679a6e4acd0cc221356c77c3b99be352245a13f,PodSandboxId:a5c64ab5021e6ace5a214fb83abccd3347a369f3a6949e3745c000002edab93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760879137736172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9t46,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 758a5741-a495-4a52-a8ae-719bdb827876,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1,PodSandboxId:3251d2ccbc8b16c3dd841a518e0a3421d2f133715d287c8b60c77bcc7331aab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760879055142662374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-969331,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: f23032c31e306cec4ba2533075186454,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42a660be-4422-4997-af62-a3ba3d1565df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d0742d791fd1b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 minutes ago       Running             kube-proxy                2                   6507d2431c621       kube-proxy-h9t46
	80fbc9bd04490       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   4 minutes ago       Running             coredns                   2                   d2ae110fd7630       coredns-66bc5c9577-mz52t
	ee5a9e1a9ac3f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   4 minutes ago       Running             kube-controller-manager   2                   9818c5b3513ac       kube-controller-manager-pause-969331
	e657f7b629ca5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   4 minutes ago       Running             kube-apiserver            2                   981d189f90122       kube-apiserver-pause-969331
	20e27df0f866c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   4 minutes ago       Running             etcd                      2                   0c1d6b90e830a       etcd-pause-969331
	deda0604a3881       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   5 minutes ago       Exited              coredns                   1                   7a46cc6ba8c34       coredns-66bc5c9577-mz52t
	8f49c8da32752       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   5 minutes ago       Exited              etcd                      1                   e0d80d7d1c33e       etcd-pause-969331
	2fc4b2cebaf66       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   5 minutes ago       Exited              kube-apiserver            1                   4489cd98ec705       kube-apiserver-pause-969331
	c39b4c3bb62fe       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   5 minutes ago       Exited              kube-controller-manager   1                   b663e6957915a       kube-controller-manager-pause-969331
	1c17deb73ba48       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 minutes ago       Exited              kube-proxy                1                   a5c64ab5021e6       kube-proxy-h9t46
	5fdd6ec490453       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 minutes ago       Exited              kube-scheduler            0                   3251d2ccbc8b1       kube-scheduler-pause-969331
	
	
	==> coredns [80fbc9bd04490b95cfcd6a361cd29ee1206ab1ba0e61a4f89dbb1328a8c39bf4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56301 - 24064 "HINFO IN 5564666658577079094.7788909582862490168. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034223537s
	
	
	==> coredns [deda0604a3881aa38c018190951670acd23bad65ab7094a72efb375d7e639be1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:36400 - 47778 "HINFO IN 4235617921756824402.3354224864465169667. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032249972s
	
	
	==> describe nodes <==
	Name:               pause-969331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-969331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=pause-969331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_04_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:04:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-969331
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:11:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:07:17 +0000   Sun, 19 Oct 2025 13:04:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:07:17 +0000   Sun, 19 Oct 2025 13:04:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:07:17 +0000   Sun, 19 Oct 2025 13:04:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:07:17 +0000   Sun, 19 Oct 2025 13:04:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.162
	  Hostname:    pause-969331
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebde1d9ef3ae4179a4bbe35664a1f9e0
	  System UUID:                ebde1d9e-f3ae-4179-a4bb-e35664a1f9e0
	  Boot ID:                    9f4c1807-4fac-48b4-90ea-1d2a8b91164e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mz52t                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     6m54s
	  kube-system                 etcd-pause-969331                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         7m
	  kube-system                 kube-apiserver-pause-969331             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-controller-manager-pause-969331    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-proxy-h9t46                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-pause-969331             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m53s                kube-proxy       
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  Starting                 7m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m7s (x8 over 7m7s)  kubelet          Node pause-969331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m7s (x8 over 7m7s)  kubelet          Node pause-969331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m7s (x7 over 7m7s)  kubelet          Node pause-969331 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    7m                   kubelet          Node pause-969331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m                   kubelet          Node pause-969331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m                   kubelet          Node pause-969331 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m                   kubelet          Node pause-969331 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  7m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m56s                node-controller  Node pause-969331 event: Registered Node pause-969331 in Controller
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node pause-969331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node pause-969331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node pause-969331 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node pause-969331 event: Registered Node pause-969331 in Controller
	
	
	==> dmesg <==
	[Oct19 13:03] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000039] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002526] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct19 13:04] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103891] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.105438] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.103211] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.137238] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.162734] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.760422] kauditd_printk_skb: 222 callbacks suppressed
	[Oct19 13:05] kauditd_printk_skb: 38 callbacks suppressed
	[Oct19 13:07] kauditd_printk_skb: 276 callbacks suppressed
	[  +1.398285] kauditd_printk_skb: 224 callbacks suppressed
	[  +1.469333] kauditd_printk_skb: 43 callbacks suppressed
	[Oct19 13:08] hrtimer: interrupt took 2432413 ns
	
	
	==> etcd [20e27df0f866c7f3b7961d960e6d2263a8501f354ffa69bb136c33db5675574e] <==
	{"level":"warn","ts":"2025-10-19T13:07:16.503869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.515323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.519848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.528093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.541542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.549345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.559892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.570480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.587703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.591294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.605840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.615547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.642199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.659270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.669058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.692905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:07:16.743965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:09:36.467469Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.859952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T13:09:36.467556Z","caller":"traceutil/trace.go:172","msg":"trace[593687468] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:589; }","duration":"141.975849ms","start":"2025-10-19T13:09:36.325566Z","end":"2025-10-19T13:09:36.467542Z","steps":["trace[593687468] 'agreement among raft nodes before linearized reading'  (duration: 22.915557ms)","trace[593687468] 'range keys from in-memory index tree'  (duration: 118.918652ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T13:09:36.468224Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.202034ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13423147999485234331 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-pause-969331.186fe65a25aa0ffe\" mod_revision:586 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-pause-969331.186fe65a25aa0ffe\" value_size:757 lease:4199775962630458521 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-scheduler-pause-969331.186fe65a25aa0ffe\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T13:09:36.468603Z","caller":"traceutil/trace.go:172","msg":"trace[877581099] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"234.029084ms","start":"2025-10-19T13:09:36.234422Z","end":"2025-10-19T13:09:36.468451Z","steps":["trace[877581099] 'process raft request'  (duration: 114.128792ms)","trace[877581099] 'compare'  (duration: 118.811103ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T13:09:54.881870Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.294556ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13423147999485234489 > lease_revoke:<id:3a4899fc94d00cb1>","response":"size:28"}
	{"level":"warn","ts":"2025-10-19T13:10:52.897339Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.629881ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13423147999485234980 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-n5tdh6gdrrc4gwyn6x5rwa7wny\" mod_revision:617 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-n5tdh6gdrrc4gwyn6x5rwa7wny\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-n5tdh6gdrrc4gwyn6x5rwa7wny\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T13:10:52.897427Z","caller":"traceutil/trace.go:172","msg":"trace[1791669190] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"256.005297ms","start":"2025-10-19T13:10:52.641409Z","end":"2025-10-19T13:10:52.897414Z","steps":["trace[1791669190] 'process raft request'  (duration: 132.138277ms)","trace[1791669190] 'compare'  (duration: 123.103032ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T13:11:19.290724Z","caller":"traceutil/trace.go:172","msg":"trace[441867551] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"125.47553ms","start":"2025-10-19T13:11:19.165234Z","end":"2025-10-19T13:11:19.290709Z","steps":["trace[441867551] 'process raft request'  (duration: 59.90758ms)","trace[441867551] 'compare'  (duration: 65.45561ms)"],"step_count":2}
	
	
	==> etcd [8f49c8da32752ce8e7e7393d89714ed081e91555e7802492d22a65660214b818] <==
	{"level":"info","ts":"2025-10-19T13:05:39.469838Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T13:05:39.469857Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-10-19T13:05:39.472398Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-19T13:05:39.472527Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-19T13:05:39.482272Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-19T13:05:39.503703Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.162:2379"}
	{"level":"info","ts":"2025-10-19T13:05:39.507333Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T13:05:40.102077Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T13:05:40.102213Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-969331","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.162:2380"],"advertise-client-urls":["https://192.168.72.162:2379"]}
	{"level":"error","ts":"2025-10-19T13:05:40.102297Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T13:05:40.105988Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T13:05:40.111247Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T13:05:40.111899Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5735a4f985d5ba48","current-leader-member-id":"5735a4f985d5ba48"}
	{"level":"info","ts":"2025-10-19T13:05:40.112093Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T13:05:40.115038Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T13:05:40.115374Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.162:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T13:05:40.116211Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.162:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T13:05:40.116241Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.162:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T13:05:40.115584Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T13:05:40.116261Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T13:05:40.116271Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T13:05:40.121276Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.162:2380"}
	{"level":"error","ts":"2025-10-19T13:05:40.121335Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.162:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T13:05:40.121362Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.162:2380"}
	{"level":"info","ts":"2025-10-19T13:05:40.121371Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-969331","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.162:2380"],"advertise-client-urls":["https://192.168.72.162:2379"]}
	
	
	==> kernel <==
	 13:11:21 up 7 min,  0 users,  load average: 0.03, 0.22, 0.14
	Linux pause-969331 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2fc4b2cebaf6604f4aa3db750ccb0ee244435bfc9f3423405c92ff4efcb85678] <==
	W1019 13:05:40.192423       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:40.192483       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1019 13:05:40.192559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1019 13:05:40.216925       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:05:40.231724       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1019 13:05:40.231904       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1019 13:05:40.232399       1 instance.go:239] Using reconciler: lease
	W1019 13:05:40.235978       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:40.236811       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:41.193283       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:41.193385       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:41.237559       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:42.662514       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:42.790000       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:42.998649       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:45.189899       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:45.722112       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:45.995485       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:49.801237       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:50.318211       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:50.712312       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:55.328461       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:56.190492       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:56.827428       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1019 13:06:00.233419       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e657f7b629ca5d30e4c69e4495631626e678877985c8653e8ea927771ca751bc] <==
	I1019 13:07:17.569696       1 aggregator.go:171] initial CRD sync complete...
	I1019 13:07:17.569765       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 13:07:17.569847       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:07:17.569878       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:07:17.586965       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:07:17.587052       1 policy_source.go:240] refreshing policies
	I1019 13:07:17.619622       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:07:17.619853       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 13:07:17.619930       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:07:17.621904       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 13:07:17.622150       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:07:17.622182       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:07:17.627553       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 13:07:17.629619       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1019 13:07:17.632959       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:07:17.650091       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:07:18.245313       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:07:18.424515       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:07:19.457434       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:07:19.495614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 13:07:19.537239       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:07:19.546108       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:07:20.921380       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:07:21.201628       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:07:21.311131       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c39b4c3bb62fe2128157d993c07111b9ae8927d89c6084d00a898bd3337ea481] <==
	
	
	==> kube-controller-manager [ee5a9e1a9ac3f7b614d51f28ce6a2fa6b67acff73bf8f5720f4be54b9093e5ad] <==
	I1019 13:07:20.910981       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 13:07:20.914867       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 13:07:20.915411       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:07:20.917207       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 13:07:20.918343       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 13:07:20.921134       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:07:20.924423       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:07:20.927221       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 13:07:20.943847       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:07:20.947301       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:07:20.947425       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:07:20.947477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 13:07:20.948592       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:07:20.948942       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 13:07:20.949012       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:07:20.949035       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 13:07:20.949052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 13:07:20.949058       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:07:20.951405       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:07:20.956661       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:07:20.962993       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:07:20.963004       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:07:20.963010       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:07:20.966080       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 13:07:20.971464       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [1c17deb73ba4849d3a92a56b2679a6e4acd0cc221356c77c3b99be352245a13f] <==
	I1019 13:05:38.426440       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:05:39.262619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	
	
	==> kube-proxy [d0742d791fd1b94132b804df7328815e9094a0e41c8c31aafc8d4a615f004538] <==
	I1019 13:07:19.011767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:07:19.113319       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:07:19.113347       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.162"]
	E1019 13:07:19.113400       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:07:19.158012       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1019 13:07:19.158079       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1019 13:07:19.158104       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:07:19.168879       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:07:19.169178       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:07:19.169215       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:07:19.172631       1 config.go:200] "Starting service config controller"
	I1019 13:07:19.174223       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:07:19.173917       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:07:19.174268       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:07:19.173932       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:07:19.174298       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:07:19.178864       1 config.go:309] "Starting node config controller"
	I1019 13:07:19.178899       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:07:19.178906       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:07:19.274619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:07:19.274669       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 13:07:19.274704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1] <==
	E1019 13:04:18.172396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 13:04:18.172482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:04:18.172552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:04:18.172624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 13:04:18.172696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 13:04:18.172719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:04:18.985867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 13:04:19.051860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:04:19.083762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:04:19.090358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 13:04:19.099759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 13:04:19.129973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:04:19.138537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 13:04:19.176009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 13:04:19.283829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 13:04:19.306839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:04:19.309417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 13:04:19.486465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1019 13:04:21.352815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:30.312696       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 13:05:30.324965       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 13:05:30.325079       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:30.327909       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 13:05:30.327920       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 13:05:30.328202       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 19 13:10:44 pause-969331 kubelet[4051]: E1019 13:10:44.344376    4051 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760879444343240855  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 13:10:47 pause-969331 kubelet[4051]: I1019 13:10:47.178465    4051 scope.go:117] "RemoveContainer" containerID="5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1"
	Oct 19 13:10:47 pause-969331 kubelet[4051]: E1019 13:10:47.195597    4051 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="cac32ba2199bce17bb4acea4cbf1d1ceb3fe81bfe0308bc84af11063c3dc12f7"
	Oct 19 13:10:47 pause-969331 kubelet[4051]: E1019 13:10:47.195987    4051 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-scheduler start failed in pod kube-scheduler-pause-969331_kube-system(f23032c31e306cec4ba2533075186454): CreateContainerError: the container name \"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 19 13:10:47 pause-969331 kubelet[4051]: E1019 13:10:47.196178    4051 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\\\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-pause-969331" podUID="f23032c31e306cec4ba2533075186454"
	Oct 19 13:10:54 pause-969331 kubelet[4051]: E1019 13:10:54.347895    4051 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760879454347049580  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 13:10:54 pause-969331 kubelet[4051]: E1019 13:10:54.347934    4051 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760879454347049580  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 13:11:01 pause-969331 kubelet[4051]: I1019 13:11:01.178858    4051 scope.go:117] "RemoveContainer" containerID="5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1"
	Oct 19 13:11:01 pause-969331 kubelet[4051]: E1019 13:11:01.190560    4051 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="cac32ba2199bce17bb4acea4cbf1d1ceb3fe81bfe0308bc84af11063c3dc12f7"
	Oct 19 13:11:01 pause-969331 kubelet[4051]: E1019 13:11:01.191063    4051 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-scheduler start failed in pod kube-scheduler-pause-969331_kube-system(f23032c31e306cec4ba2533075186454): CreateContainerError: the container name \"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 19 13:11:01 pause-969331 kubelet[4051]: E1019 13:11:01.191124    4051 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\\\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-pause-969331" podUID="f23032c31e306cec4ba2533075186454"
	Oct 19 13:11:04 pause-969331 kubelet[4051]: E1019 13:11:04.350002    4051 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760879464349604978  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 13:11:04 pause-969331 kubelet[4051]: E1019 13:11:04.350040    4051 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760879464349604978  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 13:11:13 pause-969331 kubelet[4051]: I1019 13:11:13.178388    4051 scope.go:117] "RemoveContainer" containerID="5fdd6ec490453afde767a1798ad245ffe39bcf071d0ec0ab2e89807b2c967ae1"
	Oct 19 13:11:13 pause-969331 kubelet[4051]: E1019 13:11:13.193879    4051 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="cac32ba2199bce17bb4acea4cbf1d1ceb3fe81bfe0308bc84af11063c3dc12f7"
	Oct 19 13:11:13 pause-969331 kubelet[4051]: E1019 13:11:13.193946    4051 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-scheduler start failed in pod kube-scheduler-pause-969331_kube-system(f23032c31e306cec4ba2533075186454): CreateContainerError: the container name \"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 19 13:11:13 pause-969331 kubelet[4051]: E1019 13:11:13.193974    4051 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1\\\" is already in use by 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-pause-969331" podUID="f23032c31e306cec4ba2533075186454"
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.275633    4051 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd4f2acb3-4b08-47de-9c4a-13a1702fdc26/crio-7a46cc6ba8c3430520f329079930fad973530dc8fe3214a2546779d5cee1b6be: Error finding container 7a46cc6ba8c3430520f329079930fad973530dc8fe3214a2546779d5cee1b6be: Status 404 returned error can't find the container with id 7a46cc6ba8c3430520f329079930fad973530dc8fe3214a2546779d5cee1b6be
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.275932    4051 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod71bb7d57dca0d489a37a8b22b2830f1b/crio-4489cd98ec70508828593f2a3e458003ebbcf668c1dbe68d54d788624c7c625b: Error finding container 4489cd98ec70508828593f2a3e458003ebbcf668c1dbe68d54d788624c7c625b: Status 404 returned error can't find the container with id 4489cd98ec70508828593f2a3e458003ebbcf668c1dbe68d54d788624c7c625b
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.276274    4051 manager.go:1116] Failed to create existing container: /kubepods/burstable/podf23032c31e306cec4ba2533075186454/crio-3251d2ccbc8b16c3dd841a518e0a3421d2f133715d287c8b60c77bcc7331aab7: Error finding container 3251d2ccbc8b16c3dd841a518e0a3421d2f133715d287c8b60c77bcc7331aab7: Status 404 returned error can't find the container with id 3251d2ccbc8b16c3dd841a518e0a3421d2f133715d287c8b60c77bcc7331aab7
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.276706    4051 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode60e1fe3cf1691827338ea468bb9d85a/crio-b663e6957915a911bec06e79d717fe630ca628aedd04df73c81449c2e83089cb: Error finding container b663e6957915a911bec06e79d717fe630ca628aedd04df73c81449c2e83089cb: Status 404 returned error can't find the container with id b663e6957915a911bec06e79d717fe630ca628aedd04df73c81449c2e83089cb
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.277060    4051 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfe28a219ba25e236fd3e2cb4bcb9abbd/crio-e0d80d7d1c33eff8e52243f7682223b005a82e0e5938a5d63e7a8a1715b0785f: Error finding container e0d80d7d1c33eff8e52243f7682223b005a82e0e5938a5d63e7a8a1715b0785f: Status 404 returned error can't find the container with id e0d80d7d1c33eff8e52243f7682223b005a82e0e5938a5d63e7a8a1715b0785f
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.277424    4051 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod758a5741-a495-4a52-a8ae-719bdb827876/crio-a5c64ab5021e6ace5a214fb83abccd3347a369f3a6949e3745c000002edab93f: Error finding container a5c64ab5021e6ace5a214fb83abccd3347a369f3a6949e3745c000002edab93f: Status 404 returned error can't find the container with id a5c64ab5021e6ace5a214fb83abccd3347a369f3a6949e3745c000002edab93f
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.352066    4051 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760879474351214521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 13:11:14 pause-969331 kubelet[4051]: E1019 13:11:14.352093    4051 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760879474351214521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-969331 -n pause-969331
helpers_test.go:269: (dbg) Run:  kubectl --context pause-969331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (376.56s)
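
The repeated CreateContainerError in the kubelet log occurs because the container name k8s_kube-scheduler_kube-scheduler-pause-969331_kube-system_f23032c31e306cec4ba2533075186454_1 is still held by the stale container 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f. A minimal manual cleanup sketch, assuming shell access to the guest via minikube ssh and that crictl is available on the node (names and IDs are taken from the log above):

	minikube ssh -p pause-969331
	sudo crictl ps -a | grep kube-scheduler
	sudo crictl rm 00ef14e45f14654b86b68834d35a1132024a66999d577e26ce255cd138c9cd2f

Once the stale container is removed, the kubelet should be able to recreate kube-scheduler under the same name on its next sync.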

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 25.7
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 12.26
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.66
22 TestOffline 86.09
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 196.7
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.5
35 TestAddons/parallel/Registry 20.27
36 TestAddons/parallel/RegistryCreds 1.19
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 7.24
41 TestAddons/parallel/CSI 63.44
42 TestAddons/parallel/Headlamp 20.89
43 TestAddons/parallel/CloudSpanner 6.62
44 TestAddons/parallel/LocalPath 14.09
45 TestAddons/parallel/NvidiaDevicePlugin 6.89
46 TestAddons/parallel/Yakd 10.92
48 TestAddons/StoppedEnableDisable 86.74
49 TestCertOptions 49.08
50 TestCertExpiration 467.3
52 TestForceSystemdFlag 67.24
53 TestForceSystemdEnv 68.11
55 TestKVMDriverInstallOrUpdate 1.17
59 TestErrorSpam/setup 37.71
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.6
63 TestErrorSpam/unpause 1.77
64 TestErrorSpam/stop 5.38
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.43
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 40.07
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
76 TestFunctional/serial/CacheCmd/cache/add_local 2.24
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 31.35
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.37
87 TestFunctional/serial/LogsFileCmd 1.3
88 TestFunctional/serial/InvalidService 4.58
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 16.21
92 TestFunctional/parallel/DryRun 0.26
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.99
98 TestFunctional/parallel/ServiceCmdConnect 22.52
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 40.71
102 TestFunctional/parallel/SSHCmd 0.43
103 TestFunctional/parallel/CpCmd 1.44
104 TestFunctional/parallel/MySQL 27.03
105 TestFunctional/parallel/FileSync 0.21
106 TestFunctional/parallel/CertSync 1.28
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
114 TestFunctional/parallel/License 0.51
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
117 TestFunctional/parallel/ProfileCmd/profile_list 0.37
118 TestFunctional/parallel/MountCmd/any-port 9.39
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.6
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.63
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
126 TestFunctional/parallel/ImageCommands/ImageBuild 8.4
127 TestFunctional/parallel/ImageCommands/Setup 1.94
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.78
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
137 TestFunctional/parallel/ServiceCmd/List 0.54
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.99
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
141 TestFunctional/parallel/MountCmd/specific-port 1.62
142 TestFunctional/parallel/ServiceCmd/Format 0.32
143 TestFunctional/parallel/ServiceCmd/URL 0.34
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 219.42
162 TestMultiControlPlane/serial/DeployApp 8.4
163 TestMultiControlPlane/serial/PingHostFromPods 1.21
164 TestMultiControlPlane/serial/AddWorkerNode 43.96
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
167 TestMultiControlPlane/serial/CopyFile 13.14
168 TestMultiControlPlane/serial/StopSecondaryNode 90.48
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
170 TestMultiControlPlane/serial/RestartSecondaryNode 34.9
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 301.65
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.93
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
175 TestMultiControlPlane/serial/StopCluster 241.31
176 TestMultiControlPlane/serial/RestartCluster 103.05
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
178 TestMultiControlPlane/serial/AddSecondaryNode 89.91
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
183 TestJSONOutput/start/Command 78.57
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.64
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.94
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 79.56
215 TestMountStart/serial/StartWithMountFirst 20.21
216 TestMountStart/serial/VerifyMountFirst 0.36
217 TestMountStart/serial/StartWithMountSecond 24.01
218 TestMountStart/serial/VerifyMountSecond 0.36
219 TestMountStart/serial/DeleteFirst 0.69
220 TestMountStart/serial/VerifyMountPostDelete 0.36
221 TestMountStart/serial/Stop 1.19
222 TestMountStart/serial/RestartStopped 19.69
223 TestMountStart/serial/VerifyMountPostStop 0.36
226 TestMultiNode/serial/FreshStart2Nodes 98.24
227 TestMultiNode/serial/DeployApp2Nodes 6.15
228 TestMultiNode/serial/PingHostFrom2Pods 0.77
229 TestMultiNode/serial/AddNode 43.47
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.58
232 TestMultiNode/serial/CopyFile 7.16
233 TestMultiNode/serial/StopNode 2.69
234 TestMultiNode/serial/StartAfterStop 36.67
235 TestMultiNode/serial/RestartKeepsNodes 292.35
236 TestMultiNode/serial/DeleteNode 2.81
237 TestMultiNode/serial/StopMultiNode 148.78
238 TestMultiNode/serial/RestartMultiNode 117.97
239 TestMultiNode/serial/ValidateNameConflict 40.67
246 TestScheduledStopUnix 112.61
250 TestRunningBinaryUpgrade 146
252 TestKubernetesUpgrade 201.54
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 85.72
264 TestNetworkPlugins/group/false 3.32
268 TestNoKubernetes/serial/StartWithStopK8s 31.66
269 TestStoppedBinaryUpgrade/Setup 9.47
270 TestStoppedBinaryUpgrade/Upgrade 115.09
271 TestNoKubernetes/serial/Start 45.16
280 TestPause/serial/Start 116.57
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
282 TestNoKubernetes/serial/ProfileList 8.65
283 TestNoKubernetes/serial/Stop 1.22
284 TestNoKubernetes/serial/StartNoArgs 57.17
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
288 TestNetworkPlugins/group/auto/Start 89.34
289 TestNetworkPlugins/group/kindnet/Start 63.49
290 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
291 TestNetworkPlugins/group/auto/KubeletFlags 0.2
292 TestNetworkPlugins/group/auto/NetCatPod 11.24
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
294 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
295 TestNetworkPlugins/group/kindnet/DNS 0.16
296 TestNetworkPlugins/group/kindnet/Localhost 0.15
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/kindnet/HairPin 0.14
299 TestNetworkPlugins/group/auto/Localhost 0.17
300 TestNetworkPlugins/group/auto/HairPin 0.14
301 TestNetworkPlugins/group/calico/Start 67.22
302 TestNetworkPlugins/group/custom-flannel/Start 90.26
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.22
305 TestNetworkPlugins/group/calico/NetCatPod 11.21
306 TestNetworkPlugins/group/calico/DNS 0.18
307 TestNetworkPlugins/group/calico/Localhost 0.15
308 TestNetworkPlugins/group/calico/HairPin 0.13
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
311 TestNetworkPlugins/group/custom-flannel/DNS 0.16
312 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
313 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
314 TestNetworkPlugins/group/enable-default-cni/Start 50.71
315 TestNetworkPlugins/group/flannel/Start 71.18
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
321 TestNetworkPlugins/group/bridge/Start 81.23
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
324 TestNetworkPlugins/group/flannel/NetCatPod 11.22
325 TestNetworkPlugins/group/flannel/DNS 0.14
326 TestNetworkPlugins/group/flannel/Localhost 0.12
327 TestNetworkPlugins/group/flannel/HairPin 0.12
329 TestStartStop/group/old-k8s-version/serial/FirstStart 96.59
331 TestStartStop/group/no-preload/serial/FirstStart 107.99
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
333 TestNetworkPlugins/group/bridge/NetCatPod 11.23
335 TestStartStop/group/embed-certs/serial/FirstStart 92.98
336 TestNetworkPlugins/group/bridge/DNS 0.16
337 TestNetworkPlugins/group/bridge/Localhost 0.13
338 TestNetworkPlugins/group/bridge/HairPin 0.15
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84
341 TestStartStop/group/old-k8s-version/serial/DeployApp 11.32
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
343 TestStartStop/group/old-k8s-version/serial/Stop 90.03
344 TestStartStop/group/no-preload/serial/DeployApp 10.28
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
346 TestStartStop/group/no-preload/serial/Stop 82.39
347 TestStartStop/group/embed-certs/serial/DeployApp 10.29
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
349 TestStartStop/group/embed-certs/serial/Stop 73.25
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.24
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.11
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
354 TestStartStop/group/old-k8s-version/serial/SecondStart 45.92
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/no-preload/serial/SecondStart 65.54
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
358 TestStartStop/group/embed-certs/serial/SecondStart 59.34
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 16.01
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.95
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
364 TestStartStop/group/old-k8s-version/serial/Pause 3.49
366 TestStartStop/group/newest-cni/serial/FirstStart 46.72
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
370 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/embed-certs/serial/Pause 3.3
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
374 TestStartStop/group/no-preload/serial/Pause 3.55
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.8
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
381 TestStartStop/group/newest-cni/serial/Stop 10.75
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
383 TestStartStop/group/newest-cni/serial/SecondStart 33.52
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
387 TestStartStop/group/newest-cni/serial/Pause 3.83
x
+
TestDownloadOnly/v1.28.0/json-events (25.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-004671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-004671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (25.695671384s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1019 12:06:30.918187  148701 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1019 12:06:30.918338  148701 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
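
The preload-exists check essentially verifies that the tarball fetched by the earlier json-events step is still on disk. Assuming the same MINIKUBE_HOME as in the log lines above, the equivalent manual check is:

	ls -lh /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4

With a default installation the same cache lives under ~/.minikube/cache/preloaded-tarball/.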

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-004671
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-004671: exit status 85 (61.09641ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-004671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-004671 │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:06:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:06:05.265785  148713 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:06:05.266066  148713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:06:05.266076  148713 out.go:374] Setting ErrFile to fd 2...
	I1019 12:06:05.266080  148713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:06:05.266314  148713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	W1019 12:06:05.266458  148713 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21772-144655/.minikube/config/config.json: open /home/jenkins/minikube-integration/21772-144655/.minikube/config/config.json: no such file or directory
	I1019 12:06:05.266919  148713 out.go:368] Setting JSON to true
	I1019 12:06:05.268613  148713 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2899,"bootTime":1760872666,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:06:05.268711  148713 start.go:141] virtualization: kvm guest
	I1019 12:06:05.270561  148713 out.go:99] [download-only-004671] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:06:05.270672  148713 notify.go:220] Checking for updates...
	W1019 12:06:05.270674  148713 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball: no such file or directory
	I1019 12:06:05.271863  148713 out.go:171] MINIKUBE_LOCATION=21772
	I1019 12:06:05.273579  148713 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:06:05.274664  148713 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:06:05.275905  148713 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:06:05.276917  148713 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 12:06:05.278779  148713 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 12:06:05.279016  148713 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:06:05.730495  148713 out.go:99] Using the kvm2 driver based on user configuration
	I1019 12:06:05.730526  148713 start.go:305] selected driver: kvm2
	I1019 12:06:05.730532  148713 start.go:925] validating driver "kvm2" against <nil>
	I1019 12:06:05.730901  148713 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:06:05.731377  148713 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:06:05.745598  148713 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:06:05.745626  148713 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:06:05.759033  148713 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:06:05.759074  148713 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:06:05.759695  148713 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1019 12:06:05.759843  148713 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:06:05.759866  148713 cni.go:84] Creating CNI manager for ""
	I1019 12:06:05.759912  148713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:06:05.759921  148713 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1019 12:06:05.759961  148713 start.go:349] cluster config:
	{Name:download-only-004671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-004671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:06:05.760114  148713 iso.go:125] acquiring lock: {Name:mk95990edcd162f08eff1d65580753d7d9806693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:06:05.761733  148713 out.go:99] Downloading VM boot image ...
	I1019 12:06:05.761778  148713 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21772-144655/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1019 12:06:16.989144  148713 out.go:99] Starting "download-only-004671" primary control-plane node in "download-only-004671" cluster
	I1019 12:06:16.989180  148713 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 12:06:17.096492  148713 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 12:06:17.096533  148713 cache.go:58] Caching tarball of preloaded images
	I1019 12:06:17.097310  148713 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 12:06:17.098855  148713 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1019 12:06:17.098876  148713 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 12:06:17.212775  148713 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1019 12:06:17.212913  148713 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 12:06:29.963706  148713 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 12:06:29.964051  148713 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/download-only-004671/config.json ...
	I1019 12:06:29.964082  148713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/download-only-004671/config.json: {Name:mk7d99330179b3ed657919a52a4a53fcdc967b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:06:29.964244  148713 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 12:06:29.964453  148713 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21772-144655/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-004671 host does not exist
	  To start a cluster, run: "minikube start -p download-only-004671"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-004671
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (12.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-854705 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-854705 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.259069628s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1019 12:06:43.517323  148701 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1019 12:06:43.517423  148701 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-854705
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-854705: exit status 85 (62.99459ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-004671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-004671 │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │ 19 Oct 25 12:06 UTC │
	│ delete  │ -p download-only-004671                                                                                                                                                                             │ download-only-004671 │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │ 19 Oct 25 12:06 UTC │
	│ start   │ -o=json --download-only -p download-only-854705 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-854705 │ jenkins │ v1.37.0 │ 19 Oct 25 12:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:06:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:06:31.304012  148986 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:06:31.304295  148986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:06:31.304305  148986 out.go:374] Setting ErrFile to fd 2...
	I1019 12:06:31.304308  148986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:06:31.304508  148986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:06:31.304971  148986 out.go:368] Setting JSON to true
	I1019 12:06:31.305958  148986 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2925,"bootTime":1760872666,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:06:31.306061  148986 start.go:141] virtualization: kvm guest
	I1019 12:06:31.307833  148986 out.go:99] [download-only-854705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:06:31.308038  148986 notify.go:220] Checking for updates...
	I1019 12:06:31.310837  148986 out.go:171] MINIKUBE_LOCATION=21772
	I1019 12:06:31.311955  148986 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:06:31.312989  148986 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:06:31.314042  148986 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:06:31.315114  148986 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 12:06:31.317025  148986 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 12:06:31.317350  148986 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:06:31.346485  148986 out.go:99] Using the kvm2 driver based on user configuration
	I1019 12:06:31.346513  148986 start.go:305] selected driver: kvm2
	I1019 12:06:31.346522  148986 start.go:925] validating driver "kvm2" against <nil>
	I1019 12:06:31.346822  148986 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:06:31.346886  148986 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:06:31.359630  148986 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:06:31.359650  148986 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21772-144655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 12:06:31.372179  148986 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 12:06:31.372216  148986 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:06:31.372702  148986 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1019 12:06:31.372839  148986 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:06:31.372861  148986 cni.go:84] Creating CNI manager for ""
	I1019 12:06:31.372905  148986 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 12:06:31.372915  148986 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1019 12:06:31.372957  148986 start.go:349] cluster config:
	{Name:download-only-854705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-854705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:06:31.373042  148986 iso.go:125] acquiring lock: {Name:mk95990edcd162f08eff1d65580753d7d9806693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:06:31.374457  148986 out.go:99] Starting "download-only-854705" primary control-plane node in "download-only-854705" cluster
	I1019 12:06:31.374470  148986 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:06:31.480255  148986 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:06:31.480299  148986 cache.go:58] Caching tarball of preloaded images
	I1019 12:06:31.481039  148986 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:06:31.482461  148986 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1019 12:06:31.482474  148986 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 12:06:31.594113  148986 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1019 12:06:31.594158  148986 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21772-144655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-854705 host does not exist
	  To start a cluster, run: "minikube start -p download-only-854705"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-854705
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1019 12:06:44.116202  148701 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-477717 --alsologtostderr --binary-mirror http://127.0.0.1:39483 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-477717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-477717
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
x
+
TestOffline (86.09s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-686849 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-686849 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.026324996s)
helpers_test.go:175: Cleaning up "offline-crio-686849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-686849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-686849: (1.061534471s)
--- PASS: TestOffline (86.09s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-360741
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-360741: exit status 85 (56.730635ms)

                                                
                                                
-- stdout --
	* Profile "addons-360741" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-360741"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-360741
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-360741: exit status 85 (55.782982ms)

                                                
                                                
-- stdout --
	* Profile "addons-360741" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-360741"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (196.7s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-360741 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-360741 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m16.695558948s)
--- PASS: TestAddons/Setup (196.70s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-360741 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-360741 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-360741 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-360741 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7ddf4899-804d-4751-a068-633eef6d521f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7ddf4899-804d-4751-a068-633eef6d521f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00457094s
addons_test.go:694: (dbg) Run:  kubectl --context addons-360741 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-360741 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-360741 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.50s)

                                                
                                    
x
+
TestAddons/parallel/Registry (20.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.655233ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-w9nbt" [5a256bb4-be22-4253-b273-f54382dd90ea] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005291025s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-v2zn6" [5d518ee0-0b2c-4d81-8f53-add3c51066b3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003969909s
addons_test.go:392: (dbg) Run:  kubectl --context addons-360741 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-360741 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-360741 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.500891877s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 ip
2025/10/19 12:10:40 [DEBUG] GET http://192.168.39.35:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.27s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (1.19s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.095173ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-360741
addons_test.go:332: (dbg) Run:  kubectl --context addons-360741 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.19s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fc69p" [14fd8ed2-00df-4440-ad56-9dd5d1b08268] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003957964s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.31s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.24s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.026791ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-djqc2" [572b8d9c-4d84-47b5-8f49-7478a2d3fbbf] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00385048s
addons_test.go:463: (dbg) Run:  kubectl --context addons-360741 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-360741 addons disable metrics-server --alsologtostderr -v=1: (1.146786241s)
--- PASS: TestAddons/parallel/MetricsServer (7.24s)

                                                
                                    
x
+
TestAddons/parallel/CSI (63.44s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1019 12:10:27.695854  148701 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1019 12:10:27.741420  148701 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1019 12:10:27.741454  148701 kapi.go:107] duration metric: took 45.612125ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 45.623227ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-360741 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-360741 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b6afbcc5-240e-413d-acbb-6833837085bb] Pending
helpers_test.go:352: "task-pv-pod" [b6afbcc5-240e-413d-acbb-6833837085bb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b6afbcc5-240e-413d-acbb-6833837085bb] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004139911s
addons_test.go:572: (dbg) Run:  kubectl --context addons-360741 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-360741 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-360741 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-360741 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-360741 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-360741 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-360741 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d5388179-7030-4689-a3d8-093253ac2589] Pending
helpers_test.go:352: "task-pv-pod-restore" [d5388179-7030-4689-a3d8-093253ac2589] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d5388179-7030-4689-a3d8-093253ac2589] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004955101s
addons_test.go:614: (dbg) Run:  kubectl --context addons-360741 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-360741 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-360741 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-360741 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.697600613s)
--- PASS: TestAddons/parallel/CSI (63.44s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-360741 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-sh4h7" [b3086178-89c4-4c14-86be-052a1ac35726] Pending
helpers_test.go:352: "headlamp-6945c6f4d-sh4h7" [b3086178-89c4-4c14-86be-052a1ac35726] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-sh4h7" [b3086178-89c4-4c14-86be-052a1ac35726] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-sh4h7" [b3086178-89c4-4c14-86be-052a1ac35726] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003586994s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-360741 addons disable headlamp --alsologtostderr -v=1: (6.013500641s)
--- PASS: TestAddons/parallel/Headlamp (20.89s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-fz85w" [bf13bcb4-2064-4371-a67c-5420f6cad748] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004044068s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (14.09s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-360741 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-360741 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6c59220e-91ea-4d86-9658-c89f2c1dcde6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6c59220e-91ea-4d86-9658-c89f2c1dcde6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6c59220e-91ea-4d86-9658-c89f2c1dcde6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003941265s
addons_test.go:967: (dbg) Run:  kubectl --context addons-360741 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 ssh "cat /opt/local-path-provisioner/pvc-2f20e001-4597-4197-a480-b51b1d034e34_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-360741 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-360741 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.09s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.89s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-8xnsb" [2247795d-e86a-4366-af44-71e3643b8a20] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003897404s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.89s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-tdbzr" [d4d82c67-4a8c-45d8-90ce-034dc9a8291a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004866115s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-360741 addons disable yakd --alsologtostderr -v=1: (5.91541647s)
--- PASS: TestAddons/parallel/Yakd (10.92s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (86.74s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-360741
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-360741: (1m26.469008197s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-360741
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-360741
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-360741
--- PASS: TestAddons/StoppedEnableDisable (86.74s)

                                                
                                    
x
+
TestCertOptions (49.08s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-842153 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 13:05:02.175325  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-842153 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.707952487s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-842153 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-842153 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-842153 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-842153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-842153
--- PASS: TestCertOptions (49.08s)

                                                
                                    
x
+
TestCertExpiration (467.3s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-426397 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-426397 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.026009082s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-426397 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 13:08:09.602962  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-426397 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m43.155776989s)
helpers_test.go:175: Cleaning up "cert-expiration-426397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-426397
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-426397: (1.112486802s)
--- PASS: TestCertExpiration (467.30s)

                                                
                                    
x
+
TestForceSystemdFlag (67.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-367081 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 13:04:45.253619  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-367081 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.102321259s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-367081 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-367081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-367081
--- PASS: TestForceSystemdFlag (67.24s)

                                                
                                    
x
+
TestForceSystemdEnv (68.11s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-773419 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-773419 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.110244506s)
helpers_test.go:175: Cleaning up "force-systemd-env-773419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-773419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-773419: (1.002108547s)
--- PASS: TestForceSystemdEnv (68.11s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.17s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1019 13:00:41.992577  148701 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1019 13:00:41.992726  148701 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1397333325/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 13:00:42.032881  148701 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1397333325/001/docker-machine-driver-kvm2 version is 1.1.1
W1019 13:00:42.032953  148701 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1019 13:00:42.033300  148701 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1019 13:00:42.033376  148701 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1397333325/001/docker-machine-driver-kvm2
I1019 13:00:43.015209  148701 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1397333325/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 13:00:43.033087  148701 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1397333325/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.17s)

                                                
                                    
x
+
TestErrorSpam/setup (37.71s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-822445 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-822445 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 12:15:02.175591  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:02.185835  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:02.197965  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:02.219328  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:02.260719  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:02.342225  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:02.503782  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:02.825521  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:03.467572  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:04.749178  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:07.311440  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-822445 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-822445 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.708282253s)
--- PASS: TestErrorSpam/setup (37.71s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 status
E1019 12:15:12.433535  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
x
+
TestErrorSpam/pause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

                                                
                                    
x
+
TestErrorSpam/stop (5.38s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 stop: (2.058050346s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 stop: (1.615729181s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-822445 --log_dir /tmp/nospam-822445 stop: (1.710349504s)
--- PASS: TestErrorSpam/stop (5.38s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21772-144655/.minikube/files/etc/test/nested/copy/148701/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (80.43s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789160 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 12:15:22.674883  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:15:43.156452  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:16:24.119445  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-789160 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.427275495s)
--- PASS: TestFunctional/serial/StartWithProxy (80.43s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.07s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1019 12:16:42.767888  148701 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789160 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-789160 --alsologtostderr -v=8: (40.065093253s)
functional_test.go:678: soft start took 40.065745544s for "functional-789160" cluster.
I1019 12:17:22.833338  148701 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (40.07s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-789160 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 cache add registry.k8s.io/pause:3.1: (1.057159416s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 cache add registry.k8s.io/pause:3.3: (1.206552907s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 cache add registry.k8s.io/pause:latest: (1.096672327s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-789160 /tmp/TestFunctionalserialCacheCmdcacheadd_local2353070609/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cache add minikube-local-cache-test:functional-789160
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 cache add minikube-local-cache-test:functional-789160: (1.938623539s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cache delete minikube-local-cache-test:functional-789160
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-789160
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.948358ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
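
Note: the reload cycle above can be reproduced outside the test harness with the same commands the test drives; a minimal sketch, assuming the functional-789160 profile is running with the crio runtime:

	# remove the cached image from the node, confirm crictl no longer finds it (inspecti exits non-zero)
	out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
	# push minikube's local cache back onto the node and verify the image is present again
	out/minikube-linux-amd64 -p functional-789160 cache reload
	out/minikube-linux-amd64 -p functional-789160 ssh sudo crictl inspecti registry.k8s.io/pause:latest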

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 kubectl -- --context functional-789160 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-789160 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.35s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789160 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1019 12:17:46.044082  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-789160 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.348755584s)
functional_test.go:776: restart took 31.348914146s for "functional-789160" cluster.
I1019 12:18:02.095483  148701 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (31.35s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-789160 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 logs: (1.370922743s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 logs --file /tmp/TestFunctionalserialLogsFileCmd308061214/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 logs --file /tmp/TestFunctionalserialLogsFileCmd308061214/001/logs.txt: (1.2989576s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-789160 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-789160
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-789160: exit status 115 (300.768445ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.121:32071 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-789160 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-789160 delete -f testdata/invalidsvc.yaml: (1.095953618s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)
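
Note: exit status 115 (SVC_UNREACHABLE) is how minikube's service command reports a service with no running backing pod; a minimal reproduction sketch using the repo's testdata manifest, assuming the functional-789160 profile:

	# invalidsvc.yaml defines a service whose selector matches no running pod
	kubectl --context functional-789160 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-789160 || echo "exit=$?"   # expect 115
	kubectl --context functional-789160 delete -f testdata/invalidsvc.yaml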

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 config get cpus: exit status 14 (64.54954ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 config get cpus: exit status 14 (56.010718ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
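
Note: exit status 14 here simply means the requested key is absent from the profile's config ("specified key could not be found in config"); a minimal sketch of the same set/get/unset cycle, assuming the functional-789160 profile:

	# reading an unset key fails with exit status 14
	out/minikube-linux-amd64 -p functional-789160 config get cpus || echo "exit=$?"
	# set it, read it back, then unset it so the next get fails again
	out/minikube-linux-amd64 -p functional-789160 config set cpus 2
	out/minikube-linux-amd64 -p functional-789160 config get cpus
	out/minikube-linux-amd64 -p functional-789160 config unset cpus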

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (16.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-789160 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-789160 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 156910: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.21s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-789160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (133.546439ms)

                                                
                                                
-- stdout --
	* [functional-789160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:18:12.985251  156716 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:18:12.985361  156716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:12.985370  156716 out.go:374] Setting ErrFile to fd 2...
	I1019 12:18:12.985373  156716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:12.985573  156716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:18:12.985988  156716 out.go:368] Setting JSON to false
	I1019 12:18:12.986946  156716 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3627,"bootTime":1760872666,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:18:12.987041  156716 start.go:141] virtualization: kvm guest
	I1019 12:18:12.988836  156716 out.go:179] * [functional-789160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:18:12.989883  156716 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:18:12.989958  156716 notify.go:220] Checking for updates...
	I1019 12:18:12.991932  156716 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:18:12.993305  156716 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:18:12.994315  156716 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:18:12.995323  156716 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:18:12.996259  156716 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:18:12.997612  156716 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:12.998005  156716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:18:12.998079  156716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:18:13.016480  156716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1019 12:18:13.017152  156716 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:18:13.017700  156716 main.go:141] libmachine: Using API Version  1
	I1019 12:18:13.017724  156716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:18:13.018116  156716 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:18:13.018328  156716 main.go:141] libmachine: (functional-789160) Calling .DriverName
	I1019 12:18:13.018599  156716 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:18:13.018927  156716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:18:13.018979  156716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:18:13.033012  156716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I1019 12:18:13.033627  156716 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:18:13.034076  156716 main.go:141] libmachine: Using API Version  1
	I1019 12:18:13.034098  156716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:18:13.034510  156716 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:18:13.034686  156716 main.go:141] libmachine: (functional-789160) Calling .DriverName
	I1019 12:18:13.064595  156716 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 12:18:13.065584  156716 start.go:305] selected driver: kvm2
	I1019 12:18:13.065599  156716 start.go:925] validating driver "kvm2" against &{Name:functional-789160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-789160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:18:13.065701  156716 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:18:13.067395  156716 out.go:203] 
	W1019 12:18:13.068339  156716 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 12:18:13.069274  156716 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789160 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-789160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (156.552267ms)

                                                
                                                
-- stdout --
	* [functional-789160] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:18:13.257073  156772 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:18:13.257167  156772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:13.257174  156772 out.go:374] Setting ErrFile to fd 2...
	I1019 12:18:13.257179  156772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:13.257525  156772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:18:13.258530  156772 out.go:368] Setting JSON to false
	I1019 12:18:13.259494  156772 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3627,"bootTime":1760872666,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:18:13.259585  156772 start.go:141] virtualization: kvm guest
	I1019 12:18:13.260869  156772 out.go:179] * [functional-789160] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1019 12:18:13.262490  156772 notify.go:220] Checking for updates...
	I1019 12:18:13.262492  156772 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:18:13.263753  156772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:18:13.264813  156772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 12:18:13.266151  156772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 12:18:13.267276  156772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:18:13.268506  156772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:18:13.270167  156772 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:13.270789  156772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:18:13.270850  156772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:18:13.289059  156772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39847
	I1019 12:18:13.289641  156772 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:18:13.290222  156772 main.go:141] libmachine: Using API Version  1
	I1019 12:18:13.290246  156772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:18:13.290765  156772 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:18:13.290998  156772 main.go:141] libmachine: (functional-789160) Calling .DriverName
	I1019 12:18:13.291358  156772 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:18:13.291781  156772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:18:13.291837  156772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:18:13.306574  156772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I1019 12:18:13.307121  156772 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:18:13.307624  156772 main.go:141] libmachine: Using API Version  1
	I1019 12:18:13.307656  156772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:18:13.308113  156772 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:18:13.308343  156772 main.go:141] libmachine: (functional-789160) Calling .DriverName
	I1019 12:18:13.348416  156772 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1019 12:18:13.349790  156772 start.go:305] selected driver: kvm2
	I1019 12:18:13.349809  156772 start.go:925] validating driver "kvm2" against &{Name:functional-789160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-789160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:18:13.349934  156772 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:18:13.352341  156772 out.go:203] 
	W1019 12:18:13.353448  156772 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1019 12:18:13.354482  156772 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (22.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-789160 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-789160 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-k8zrp" [e7c9f0a9-f0bb-4a07-bef3-21c89635352e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-k8zrp" [e7c9f0a9-f0bb-4a07-bef3-21c89635352e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.006019452s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.121:31110
functional_test.go:1680: http://192.168.39.121:31110: success! body:
Request served by hello-node-connect-7d85dfc575-k8zrp

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.121:31110
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.52s)
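
Note: a sketch of the same NodePort round-trip done by hand; the kubectl wait step is a stand-in for the test's own pod polling, and the rest mirrors the commands logged above (assuming the functional-789160 profile and the kicbase/echo-server image):

	# deploy the echo server and expose it on a NodePort
	kubectl --context functional-789160 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-789160 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-789160 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
	# ask minikube for the node URL and hit it; the echo server reports which pod served the request
	URL=$(out/minikube-linux-amd64 -p functional-789160 service hello-node-connect --url)
	curl -s "$URL"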

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (40.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2e8455a8-4a28-4385-84e7-55d741ebf748] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006122203s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-789160 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-789160 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-789160 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-789160 apply -f testdata/storage-provisioner/pod.yaml
I1019 12:18:26.306873  148701 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [dc0a55ad-dfb2-43fb-a228-186934b920b3] Pending
helpers_test.go:352: "sp-pod" [dc0a55ad-dfb2-43fb-a228-186934b920b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/10/19 12:18:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [dc0a55ad-dfb2-43fb-a228-186934b920b3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.003555827s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-789160 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-789160 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-789160 apply -f testdata/storage-provisioner/pod.yaml
I1019 12:18:54.288380  148701 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8b585b0c-a961-495b-b268-016fbc00f0f3] Pending
helpers_test.go:352: "sp-pod" [8b585b0c-a961-495b-b268-016fbc00f0f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8b585b0c-a961-495b-b268-016fbc00f0f3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003977702s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-789160 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.71s)
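
Note: the persistence check above amounts to writing a file through one pod and reading it back from a replacement pod bound to the same claim; a minimal sketch with the repo's testdata manifests, assuming the functional-789160 profile (kubectl wait stands in for the test's polling):

	# create the claim and a pod that mounts it at /tmp/mount, then write a marker file
	kubectl --context functional-789160 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-789160 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-789160 wait --for=condition=ready pod sp-pod --timeout=120s
	kubectl --context functional-789160 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod and confirm the file survived on the provisioned volume
	kubectl --context functional-789160 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-789160 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-789160 wait --for=condition=ready pod sp-pod --timeout=120s
	kubectl --context functional-789160 exec sp-pod -- ls /tmp/mount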

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh -n functional-789160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cp functional-789160:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1839818774/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh -n functional-789160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh -n functional-789160 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-789160 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-fpbtq" [2cb6524e-8771-49c0-8c9b-2e7ac7fae437] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-fpbtq" [2cb6524e-8771-49c0-8c9b-2e7ac7fae437] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.522970157s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-789160 exec mysql-5bb876957f-fpbtq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-789160 exec mysql-5bb876957f-fpbtq -- mysql -ppassword -e "show databases;": exit status 1 (164.233784ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1019 12:18:45.144210  148701 retry.go:31] will retry after 889.824339ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-789160 exec mysql-5bb876957f-fpbtq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-789160 exec mysql-5bb876957f-fpbtq -- mysql -ppassword -e "show databases;": exit status 1 (205.959719ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1019 12:18:46.241102  148701 retry.go:31] will retry after 937.93466ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-789160 exec mysql-5bb876957f-fpbtq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.03s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/148701/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo cat /etc/test/nested/copy/148701/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/148701.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo cat /etc/ssl/certs/148701.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/148701.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo cat /usr/share/ca-certificates/148701.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1487012.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo cat /etc/ssl/certs/1487012.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1487012.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo cat /usr/share/ca-certificates/1487012.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-789160 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh "sudo systemctl is-active docker": exit status 1 (211.021407ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh "sudo systemctl is-active containerd": exit status 1 (218.950407ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
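
Note: with crio selected as the container runtime, the docker and containerd units are expected to be stopped; systemctl is-active prints "inactive" and exits with status 3 for a stopped unit, which is the non-zero ssh exit seen above. A minimal check, assuming the functional-789160 profile:

	# both should print "inactive" and exit 3 on a crio-backed node
	out/minikube-linux-amd64 -p functional-789160 ssh "sudo systemctl is-active docker" || true
	out/minikube-linux-amd64 -p functional-789160 ssh "sudo systemctl is-active containerd" || true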

                                                
                                    
x
+
TestFunctional/parallel/License (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-789160 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-789160 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-n2pjv" [77abc38f-7860-4cc8-bc50-e931d6375667] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-n2pjv" [77abc38f-7860-4cc8-bc50-e931d6375667] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.009434538s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "318.689342ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "54.68198ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdany-port1636447653/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760876290403216712" to /tmp/TestFunctionalparallelMountCmdany-port1636447653/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760876290403216712" to /tmp/TestFunctionalparallelMountCmdany-port1636447653/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760876290403216712" to /tmp/TestFunctionalparallelMountCmdany-port1636447653/001/test-1760876290403216712
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.403502ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 12:18:10.623124  148701 retry.go:31] will retry after 268.738165ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 12:18 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 12:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 12:18 test-1760876290403216712
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh cat /mount-9p/test-1760876290403216712
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-789160 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b4e76189-2476-4596-b2b5-541218b23e9b] Pending
helpers_test.go:352: "busybox-mount" [b4e76189-2476-4596-b2b5-541218b23e9b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b4e76189-2476-4596-b2b5-541218b23e9b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b4e76189-2476-4596-b2b5-541218b23e9b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005629798s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-789160 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdany-port1636447653/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.39s)
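
The 9p mount cycle above can be reproduced outside the test harness. A sketch assuming the same profile and a placeholder host directory (/tmp/mount-demo is not from this run); the busybox-mount pod step is omitted:

# Mount a host directory into the guest at /mount-9p over 9p (run in the background)
out/minikube-linux-amd64 mount -p functional-789160 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
# Confirm the mount from inside the guest, list its contents, then force-unmount as the test does
out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-789160 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-789160 ssh "sudo umount -f /mount-9p"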

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "291.546127ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.98478ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789160 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-789160
localhost/kicbase/echo-server:functional-789160
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789160 image ls --format short --alsologtostderr:
I1019 12:18:30.740737  158137 out.go:360] Setting OutFile to fd 1 ...
I1019 12:18:30.741097  158137 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:30.741114  158137 out.go:374] Setting ErrFile to fd 2...
I1019 12:18:30.741122  158137 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:30.741461  158137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
I1019 12:18:30.742414  158137 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:30.742572  158137 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:30.743180  158137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:30.743270  158137 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:30.757327  158137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
I1019 12:18:30.757865  158137 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:30.758472  158137 main.go:141] libmachine: Using API Version  1
I1019 12:18:30.758499  158137 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:30.758957  158137 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:30.759232  158137 main.go:141] libmachine: (functional-789160) Calling .GetState
I1019 12:18:30.761594  158137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:30.761645  158137 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:30.775684  158137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43879
I1019 12:18:30.776185  158137 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:30.776800  158137 main.go:141] libmachine: Using API Version  1
I1019 12:18:30.776832  158137 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:30.777174  158137 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:30.777381  158137 main.go:141] libmachine: (functional-789160) Calling .DriverName
I1019 12:18:30.777659  158137 ssh_runner.go:195] Run: systemctl --version
I1019 12:18:30.777692  158137 main.go:141] libmachine: (functional-789160) Calling .GetSSHHostname
I1019 12:18:30.781106  158137 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:30.781582  158137 main.go:141] libmachine: (functional-789160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:6f:a2", ip: ""} in network mk-functional-789160: {Iface:virbr1 ExpiryTime:2025-10-19 13:15:37 +0000 UTC Type:0 Mac:52:54:00:d2:6f:a2 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-789160 Clientid:01:52:54:00:d2:6f:a2}
I1019 12:18:30.781610  158137 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined IP address 192.168.39.121 and MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:30.781813  158137 main.go:141] libmachine: (functional-789160) Calling .GetSSHPort
I1019 12:18:30.782010  158137 main.go:141] libmachine: (functional-789160) Calling .GetSSHKeyPath
I1019 12:18:30.782204  158137 main.go:141] libmachine: (functional-789160) Calling .GetSSHUsername
I1019 12:18:30.782380  158137 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/functional-789160/id_rsa Username:docker}
I1019 12:18:30.881637  158137 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 12:18:31.215003  158137 main.go:141] libmachine: Making call to close driver server
I1019 12:18:31.215023  158137 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:31.215350  158137 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:31.215373  158137 main.go:141] libmachine: Making call to close connection to plugin binary
I1019 12:18:31.215393  158137 main.go:141] libmachine: Making call to close driver server
I1019 12:18:31.215411  158137 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:31.215418  158137 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:31.215649  158137 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:31.215727  158137 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:31.215788  158137 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.63s)
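
The stderr above shows that image ls is backed by crictl on the guest (ssh_runner executes sudo crictl images --output json). The same data can be queried directly; the jq filter is illustrative and not part of the test:

# List image tags straight from the CRI-O image store that `image ls` reads
out/minikube-linux-amd64 -p functional-789160 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'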

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789160 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-789160  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-789160  │ ca635915b7583 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ localhost/my-image                      │ functional-789160  │ 673077fcacda6 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789160 image ls --format table --alsologtostderr:
I1019 12:18:40.256751  158300 out.go:360] Setting OutFile to fd 1 ...
I1019 12:18:40.256996  158300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:40.257004  158300 out.go:374] Setting ErrFile to fd 2...
I1019 12:18:40.257008  158300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:40.257228  158300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
I1019 12:18:40.257808  158300 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:40.257898  158300 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:40.258269  158300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:40.258351  158300 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:40.272715  158300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
I1019 12:18:40.273273  158300 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:40.273848  158300 main.go:141] libmachine: Using API Version  1
I1019 12:18:40.273873  158300 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:40.274261  158300 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:40.274471  158300 main.go:141] libmachine: (functional-789160) Calling .GetState
I1019 12:18:40.276514  158300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:40.276551  158300 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:40.290051  158300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
I1019 12:18:40.290582  158300 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:40.290968  158300 main.go:141] libmachine: Using API Version  1
I1019 12:18:40.290986  158300 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:40.291428  158300 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:40.291626  158300 main.go:141] libmachine: (functional-789160) Calling .DriverName
I1019 12:18:40.291832  158300 ssh_runner.go:195] Run: systemctl --version
I1019 12:18:40.291858  158300 main.go:141] libmachine: (functional-789160) Calling .GetSSHHostname
I1019 12:18:40.294742  158300 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:40.295153  158300 main.go:141] libmachine: (functional-789160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:6f:a2", ip: ""} in network mk-functional-789160: {Iface:virbr1 ExpiryTime:2025-10-19 13:15:37 +0000 UTC Type:0 Mac:52:54:00:d2:6f:a2 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-789160 Clientid:01:52:54:00:d2:6f:a2}
I1019 12:18:40.295179  158300 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined IP address 192.168.39.121 and MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:40.295387  158300 main.go:141] libmachine: (functional-789160) Calling .GetSSHPort
I1019 12:18:40.295535  158300 main.go:141] libmachine: (functional-789160) Calling .GetSSHKeyPath
I1019 12:18:40.295627  158300 main.go:141] libmachine: (functional-789160) Calling .GetSSHUsername
I1019 12:18:40.295754  158300 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/functional-789160/id_rsa Username:docker}
I1019 12:18:40.377746  158300 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 12:18:40.416399  158300 main.go:141] libmachine: Making call to close driver server
I1019 12:18:40.416427  158300 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:40.416716  158300 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:40.416729  158300 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:40.416740  158300 main.go:141] libmachine: Making call to close connection to plugin binary
I1019 12:18:40.416756  158300 main.go:141] libmachine: Making call to close driver server
I1019 12:18:40.416768  158300 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:40.417048  158300 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:40.417072  158300 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:40.417090  158300 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789160 image ls --format json --alsologtostderr:
[{"id":"673077fcacda6fe458a321b777de9645076d36bb3709484bf7c7c5c473596dc9","repoDigests":["localhost/my-image@sha256:d659d9d28f5f3b3b9528732350462bf2212ad7b4fc3c43a8cced817d0c268f6c"],"repoTags":["localhost/my-image:functional-789160"],"size":"1468597"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a24
9e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/
k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ca635915b7583c468465fcf5d93c14c89221905cd3517ae2fbe69d3140224762","repoDigests":["localhost/minikube-local-cache-test@sha256:d46565950de39bbad122603642cac96c05c4cf03b0e8735798a7fbd09e741505"],"repoTags":["localhost/minikube-local-cache-test:functional-789160"],"size":"3328"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330
079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/
dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io
/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9292914047fef054e7b9bd696c2f035808a0ac6158c020a3230340ace773be74","repoDigests":["docker.io/library/0bc37c850b429ea1609ff92889bb866e7cb75b85b9d153181b9c9370c22daae3-tmp@sha256:60ef0a85970abb8a342118ead3da004e273a84a057b5fd18f5be899e2be1301e"],"repoTags":[],"size":"1466017"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eb
a7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-789160"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3b
db1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789160 image ls --format json --alsologtostderr:
I1019 12:18:40.047129  158276 out.go:360] Setting OutFile to fd 1 ...
I1019 12:18:40.047409  158276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:40.047421  158276 out.go:374] Setting ErrFile to fd 2...
I1019 12:18:40.047425  158276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:40.047617  158276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
I1019 12:18:40.048271  158276 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:40.048388  158276 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:40.048757  158276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:40.048811  158276 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:40.062490  158276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
I1019 12:18:40.063066  158276 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:40.063691  158276 main.go:141] libmachine: Using API Version  1
I1019 12:18:40.063721  158276 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:40.064068  158276 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:40.064383  158276 main.go:141] libmachine: (functional-789160) Calling .GetState
I1019 12:18:40.066332  158276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:40.066371  158276 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:40.079732  158276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41149
I1019 12:18:40.080161  158276 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:40.080892  158276 main.go:141] libmachine: Using API Version  1
I1019 12:18:40.080922  158276 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:40.081339  158276 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:40.081557  158276 main.go:141] libmachine: (functional-789160) Calling .DriverName
I1019 12:18:40.081781  158276 ssh_runner.go:195] Run: systemctl --version
I1019 12:18:40.081818  158276 main.go:141] libmachine: (functional-789160) Calling .GetSSHHostname
I1019 12:18:40.085094  158276 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:40.085552  158276 main.go:141] libmachine: (functional-789160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:6f:a2", ip: ""} in network mk-functional-789160: {Iface:virbr1 ExpiryTime:2025-10-19 13:15:37 +0000 UTC Type:0 Mac:52:54:00:d2:6f:a2 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-789160 Clientid:01:52:54:00:d2:6f:a2}
I1019 12:18:40.085587  158276 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined IP address 192.168.39.121 and MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:40.085708  158276 main.go:141] libmachine: (functional-789160) Calling .GetSSHPort
I1019 12:18:40.085887  158276 main.go:141] libmachine: (functional-789160) Calling .GetSSHKeyPath
I1019 12:18:40.086039  158276 main.go:141] libmachine: (functional-789160) Calling .GetSSHUsername
I1019 12:18:40.086187  158276 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/functional-789160/id_rsa Username:docker}
I1019 12:18:40.164884  158276 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 12:18:40.203258  158276 main.go:141] libmachine: Making call to close driver server
I1019 12:18:40.203272  158276 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:40.203610  158276 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:40.203629  158276 main.go:141] libmachine: Making call to close connection to plugin binary
I1019 12:18:40.203638  158276 main.go:141] libmachine: Making call to close driver server
I1019 12:18:40.203646  158276 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:40.203654  158276 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:40.203902  158276 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:40.203919  158276 main.go:141] libmachine: Making call to close connection to plugin binary
I1019 12:18:40.203933  158276 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789160 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ca635915b7583c468465fcf5d93c14c89221905cd3517ae2fbe69d3140224762
repoDigests:
- localhost/minikube-local-cache-test@sha256:d46565950de39bbad122603642cac96c05c4cf03b0e8735798a7fbd09e741505
repoTags:
- localhost/minikube-local-cache-test:functional-789160
size: "3328"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-789160
size: "4944818"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789160 image ls --format yaml --alsologtostderr:
I1019 12:18:31.361946  158160 out.go:360] Setting OutFile to fd 1 ...
I1019 12:18:31.363529  158160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:31.363550  158160 out.go:374] Setting ErrFile to fd 2...
I1019 12:18:31.363558  158160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:31.364072  158160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
I1019 12:18:31.364984  158160 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:31.365130  158160 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:31.365759  158160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:31.365818  158160 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:31.380209  158160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
I1019 12:18:31.380809  158160 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:31.381638  158160 main.go:141] libmachine: Using API Version  1
I1019 12:18:31.381670  158160 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:31.382084  158160 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:31.382302  158160 main.go:141] libmachine: (functional-789160) Calling .GetState
I1019 12:18:31.384674  158160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:31.384726  158160 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:31.399267  158160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42329
I1019 12:18:31.399872  158160 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:31.400520  158160 main.go:141] libmachine: Using API Version  1
I1019 12:18:31.400551  158160 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:31.400852  158160 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:31.401157  158160 main.go:141] libmachine: (functional-789160) Calling .DriverName
I1019 12:18:31.401408  158160 ssh_runner.go:195] Run: systemctl --version
I1019 12:18:31.401464  158160 main.go:141] libmachine: (functional-789160) Calling .GetSSHHostname
I1019 12:18:31.404906  158160 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:31.405411  158160 main.go:141] libmachine: (functional-789160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:6f:a2", ip: ""} in network mk-functional-789160: {Iface:virbr1 ExpiryTime:2025-10-19 13:15:37 +0000 UTC Type:0 Mac:52:54:00:d2:6f:a2 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-789160 Clientid:01:52:54:00:d2:6f:a2}
I1019 12:18:31.405437  158160 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined IP address 192.168.39.121 and MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:31.405649  158160 main.go:141] libmachine: (functional-789160) Calling .GetSSHPort
I1019 12:18:31.405841  158160 main.go:141] libmachine: (functional-789160) Calling .GetSSHKeyPath
I1019 12:18:31.406040  158160 main.go:141] libmachine: (functional-789160) Calling .GetSSHUsername
I1019 12:18:31.406202  158160 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/functional-789160/id_rsa Username:docker}
I1019 12:18:31.504873  158160 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 12:18:31.587028  158160 main.go:141] libmachine: Making call to close driver server
I1019 12:18:31.587045  158160 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:31.587371  158160 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:31.587388  158160 main.go:141] libmachine: Making call to close connection to plugin binary
I1019 12:18:31.587400  158160 main.go:141] libmachine: Making call to close driver server
I1019 12:18:31.587409  158160 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:31.587416  158160 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:31.587664  158160 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:31.587706  158160 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:31.587710  158160 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh pgrep buildkitd: exit status 1 (238.795548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image build -t localhost/my-image:functional-789160 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 image build -t localhost/my-image:functional-789160 testdata/build --alsologtostderr: (7.939719923s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789160 image build -t localhost/my-image:functional-789160 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9292914047f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-789160
--> 673077fcacd
Successfully tagged localhost/my-image:functional-789160
673077fcacda6fe458a321b777de9645076d36bb3709484bf7c7c5c473596dc9
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789160 image build -t localhost/my-image:functional-789160 testdata/build --alsologtostderr:
I1019 12:18:31.886483  158213 out.go:360] Setting OutFile to fd 1 ...
I1019 12:18:31.886765  158213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:31.886776  158213 out.go:374] Setting ErrFile to fd 2...
I1019 12:18:31.886780  158213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:18:31.886972  158213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
I1019 12:18:31.887583  158213 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:31.888478  158213 config.go:182] Loaded profile config "functional-789160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:18:31.888809  158213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:31.888854  158213 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:31.902865  158213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
I1019 12:18:31.903424  158213 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:31.903925  158213 main.go:141] libmachine: Using API Version  1
I1019 12:18:31.903946  158213 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:31.904326  158213 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:31.904547  158213 main.go:141] libmachine: (functional-789160) Calling .GetState
I1019 12:18:31.906654  158213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 12:18:31.906712  158213 main.go:141] libmachine: Launching plugin server for driver kvm2
I1019 12:18:31.920536  158213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40957
I1019 12:18:31.921045  158213 main.go:141] libmachine: () Calling .GetVersion
I1019 12:18:31.921513  158213 main.go:141] libmachine: Using API Version  1
I1019 12:18:31.921534  158213 main.go:141] libmachine: () Calling .SetConfigRaw
I1019 12:18:31.921838  158213 main.go:141] libmachine: () Calling .GetMachineName
I1019 12:18:31.922039  158213 main.go:141] libmachine: (functional-789160) Calling .DriverName
I1019 12:18:31.922247  158213 ssh_runner.go:195] Run: systemctl --version
I1019 12:18:31.922274  158213 main.go:141] libmachine: (functional-789160) Calling .GetSSHHostname
I1019 12:18:31.925917  158213 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:31.926550  158213 main.go:141] libmachine: (functional-789160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:6f:a2", ip: ""} in network mk-functional-789160: {Iface:virbr1 ExpiryTime:2025-10-19 13:15:37 +0000 UTC Type:0 Mac:52:54:00:d2:6f:a2 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-789160 Clientid:01:52:54:00:d2:6f:a2}
I1019 12:18:31.926593  158213 main.go:141] libmachine: (functional-789160) DBG | domain functional-789160 has defined IP address 192.168.39.121 and MAC address 52:54:00:d2:6f:a2 in network mk-functional-789160
I1019 12:18:31.926683  158213 main.go:141] libmachine: (functional-789160) Calling .GetSSHPort
I1019 12:18:31.926840  158213 main.go:141] libmachine: (functional-789160) Calling .GetSSHKeyPath
I1019 12:18:31.926970  158213 main.go:141] libmachine: (functional-789160) Calling .GetSSHUsername
I1019 12:18:31.927109  158213 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/functional-789160/id_rsa Username:docker}
I1019 12:18:32.026553  158213 build_images.go:161] Building image from path: /tmp/build.470824481.tar
I1019 12:18:32.026629  158213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1019 12:18:32.045013  158213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.470824481.tar
I1019 12:18:32.052031  158213 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.470824481.tar: stat -c "%s %y" /var/lib/minikube/build/build.470824481.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.470824481.tar': No such file or directory
I1019 12:18:32.052078  158213 ssh_runner.go:362] scp /tmp/build.470824481.tar --> /var/lib/minikube/build/build.470824481.tar (3072 bytes)
I1019 12:18:32.110221  158213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.470824481
I1019 12:18:32.130423  158213 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.470824481 -xf /var/lib/minikube/build/build.470824481.tar
I1019 12:18:32.147640  158213 crio.go:315] Building image: /var/lib/minikube/build/build.470824481
I1019 12:18:32.147710  158213 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-789160 /var/lib/minikube/build/build.470824481 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1019 12:18:39.741762  158213 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-789160 /var/lib/minikube/build/build.470824481 --cgroup-manager=cgroupfs: (7.594002828s)
I1019 12:18:39.741892  158213 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.470824481
I1019 12:18:39.754637  158213 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.470824481.tar
I1019 12:18:39.767680  158213 build_images.go:217] Built localhost/my-image:functional-789160 from /tmp/build.470824481.tar
I1019 12:18:39.767711  158213 build_images.go:133] succeeded building to: functional-789160
I1019 12:18:39.767715  158213 build_images.go:134] failed building to: 
I1019 12:18:39.767740  158213 main.go:141] libmachine: Making call to close driver server
I1019 12:18:39.767766  158213 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:39.768011  158213 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:39.768028  158213 main.go:141] libmachine: Making call to close connection to plugin binary
I1019 12:18:39.768059  158213 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
I1019 12:18:39.768187  158213 main.go:141] libmachine: Making call to close driver server
I1019 12:18:39.768215  158213 main.go:141] libmachine: (functional-789160) Calling .Close
I1019 12:18:39.768481  158213 main.go:141] libmachine: Successfully made call to close driver server
I1019 12:18:39.768496  158213 main.go:141] libmachine: Making call to close connection to plugin binary
I1019 12:18:39.768518  158213 main.go:141] libmachine: (functional-789160) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.40s)
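
The STEP 1/3..3/3 lines above pin down the build instructions used by testdata/build. A hedged sketch of an equivalent out-of-band build follows; the directory name, the Dockerfile file name, and the content.txt payload are placeholders, since only the instructions themselves appear in the log:

# Recreate an equivalent build context and run the same image build command as the test
mkdir -p /tmp/build-demo
printf 'placeholder\n' > /tmp/build-demo/content.txt   # actual payload not shown in the log
cat > /tmp/build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-789160 image build -t localhost/my-image:functional-789160 /tmp/build-demo --alsologtostderr
out/minikube-linux-amd64 -p functional-789160 image ls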

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.913158885s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-789160
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
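
All three UpdateContextCmd variants above run the same command against different kubeconfig states. A minimal sketch; the kubectl verification step is an assumption and is not part of the test.

  # verbatim from the tests above: rewrite the kubeconfig entry for the profile
  out/minikube-linux-amd64 -p functional-789160 update-context --alsologtostderr -v=2
  # assumed follow-up check: confirm the current context points at the profile
  kubectl config current-context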

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image load --daemon kicbase/echo-server:functional-789160 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-789160 image load --daemon kicbase/echo-server:functional-789160 --alsologtostderr: (1.155846885s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image load --daemon kicbase/echo-server:functional-789160 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-789160
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image load --daemon kicbase/echo-server:functional-789160 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)
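
The three daemon-load tests above exercise one workflow: tag an image in the host Docker daemon, load it into the cluster runtime, and list it. The sketch below uses only commands copied from those tests.

  docker pull kicbase/echo-server:latest
  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-789160
  out/minikube-linux-amd64 -p functional-789160 image load --daemon kicbase/echo-server:functional-789160 --alsologtostderr
  out/minikube-linux-amd64 -p functional-789160 image ls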

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image save kicbase/echo-server:functional-789160 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image rm kicbase/echo-server:functional-789160 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)
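
ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a tarball round trip: export the image from the cluster, delete it, then restore it from the tar. The commands below are copied verbatim from the three tests.

  out/minikube-linux-amd64 -p functional-789160 image save kicbase/echo-server:functional-789160 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-789160 image rm kicbase/echo-server:functional-789160 --alsologtostderr
  out/minikube-linux-amd64 -p functional-789160 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-789160 image ls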

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 service list -o json
functional_test.go:1504: Took "514.96568ms" to run "out/minikube-linux-amd64 -p functional-789160 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-789160
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 image save --daemon kicbase/echo-server:functional-789160 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-789160
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)
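
ImageSaveDaemon goes the other direction: the image is removed from the host Docker daemon and then pushed back into it from the cluster; note that the test inspects it under the localhost/ prefix after re-import. The commands are copied from the test.

  docker rmi kicbase/echo-server:functional-789160
  out/minikube-linux-amd64 -p functional-789160 image save --daemon kicbase/echo-server:functional-789160 --alsologtostderr
  docker image inspect localhost/kicbase/echo-server:functional-789160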

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.121:30171
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdspecific-port3707843959/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.033197ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 12:18:20.090769  148701 retry.go:31] will retry after 298.546431ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdspecific-port3707843959/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh "sudo umount -f /mount-9p": exit status 1 (190.050657ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-789160 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdspecific-port3707843959/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)
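
The specific-port test runs one 9p mount cycle: mount a host directory into the guest on a fixed port, verify it from inside the VM, then tear it down. A sketch follows, with commands copied from the test; running the mount in the background with `&` (instead of the test harness's daemon wrapper) is an assumption. The "not mounted" / exit 32 result on the final umount matches the log above, since the mount process has already been stopped at that point.

  out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdspecific-port3707843959/001:/mount-9p --alsologtostderr -v=1 --port 46464 &
  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-789160 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-789160 ssh "sudo umount -f /mount-9p"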

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.121:30171
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
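
The ServiceCmd tests above all query the same hello-node service through different output modes (plain list, JSON, HTTPS URL, IP-only format, HTTP URL). A sketch with commands copied verbatim; the endpoints noted are the ones found in this run.

  out/minikube-linux-amd64 -p functional-789160 service list
  out/minikube-linux-amd64 -p functional-789160 service list -o json
  out/minikube-linux-amd64 -p functional-789160 service --namespace=default --https --url hello-node   # https://192.168.39.121:30171 in this run
  out/minikube-linux-amd64 -p functional-789160 service hello-node --url --format={{.IP}}
  out/minikube-linux-amd64 -p functional-789160 service hello-node --url                               # http://192.168.39.121:30171 in this run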

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup724658903/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup724658903/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup724658903/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T" /mount1: exit status 1 (243.936788ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 12:18:21.665781  148701 retry.go:31] will retry after 401.257412ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-789160 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup724658903/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup724658903/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup724658903/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)
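
VerifyCleanup mounts the same host directory at three guest paths and then removes every mount for the profile in one call with --kill=true. A sketch under the assumption that the mounts are backgrounded with `&`; the individual commands are copied from the test.

  for m in /mount1 /mount2 /mount3; do
    out/minikube-linux-amd64 mount -p functional-789160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup724658903/001:$m --alsologtostderr -v=1 &
  done
  out/minikube-linux-amd64 -p functional-789160 ssh "findmnt -T" /mount1
  # stop all mount processes for the profile at once
  out/minikube-linux-amd64 mount -p functional-789160 --kill=true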

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-789160
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-789160
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-789160
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (219.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 12:20:02.177002  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:20:29.886138  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m38.726111149s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (219.42s)
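
StartCluster brings up the HA profile with three control-plane nodes (ha-930506, -m02 and -m03 in the status output further down) and waits for all components. The flags below are copied from the test invocation; --auto-update-drivers=false is specific to the CI harness.

  out/minikube-linux-amd64 -p ha-930506 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5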

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 kubectl -- rollout status deployment/busybox: (6.267506693s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-d9vwh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-k9w8n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-wcshj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-d9vwh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-k9w8n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-wcshj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-d9vwh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-k9w8n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-wcshj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.40s)
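
DeployApp applies a busybox deployment, waits for the rollout, and checks in-cluster DNS from every replica. The loop below is a condensed sketch of the per-pod checks (the test enumerates the pod names explicitly); it assumes the default namespace contains only the busybox pods, as in this run.

  out/minikube-linux-amd64 -p ha-930506 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 -p ha-930506 kubectl -- rollout status deployment/busybox
  for pod in $(out/minikube-linux-amd64 -p ha-930506 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
    out/minikube-linux-amd64 -p ha-930506 kubectl -- exec $pod -- nslookup kubernetes.default.svc.cluster.local
  done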

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-d9vwh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-d9vwh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-k9w8n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-k9w8n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-wcshj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-wcshj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)
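
PingHostFromPods resolves host.minikube.internal inside each pod and pings the resolved address (the host-side gateway, 192.168.39.1 in this run). The awk 'NR==5' | cut pipeline simply extracts the address line from busybox's nslookup output; both commands below are copied verbatim from the test.

  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-d9vwh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-amd64 -p ha-930506 kubectl -- exec busybox-7b57f96db7-d9vwh -- sh -c "ping -c 1 192.168.39.1"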

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (43.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 node add --alsologtostderr -v 5
E1019 12:23:09.603151  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:09.609583  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:09.621038  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:09.642460  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:09.683959  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:09.765787  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:09.927865  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:10.249378  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:10.891250  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:12.173106  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:14.735274  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:19.856727  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:30.099107  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 node add --alsologtostderr -v 5: (43.128921762s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-930506 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp testdata/cp-test.txt ha-930506:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3593630430/001/cp-test_ha-930506.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506:/home/docker/cp-test.txt ha-930506-m02:/home/docker/cp-test_ha-930506_ha-930506-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test_ha-930506_ha-930506-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506:/home/docker/cp-test.txt ha-930506-m03:/home/docker/cp-test_ha-930506_ha-930506-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test_ha-930506_ha-930506-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506:/home/docker/cp-test.txt ha-930506-m04:/home/docker/cp-test_ha-930506_ha-930506-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test_ha-930506_ha-930506-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp testdata/cp-test.txt ha-930506-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3593630430/001/cp-test_ha-930506-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m02:/home/docker/cp-test.txt ha-930506:/home/docker/cp-test_ha-930506-m02_ha-930506.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test_ha-930506-m02_ha-930506.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m02:/home/docker/cp-test.txt ha-930506-m03:/home/docker/cp-test_ha-930506-m02_ha-930506-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test_ha-930506-m02_ha-930506-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m02:/home/docker/cp-test.txt ha-930506-m04:/home/docker/cp-test_ha-930506-m02_ha-930506-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test_ha-930506-m02_ha-930506-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp testdata/cp-test.txt ha-930506-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3593630430/001/cp-test_ha-930506-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m03:/home/docker/cp-test.txt ha-930506:/home/docker/cp-test_ha-930506-m03_ha-930506.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test_ha-930506-m03_ha-930506.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m03:/home/docker/cp-test.txt ha-930506-m02:/home/docker/cp-test_ha-930506-m03_ha-930506-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test_ha-930506-m03_ha-930506-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m03:/home/docker/cp-test.txt ha-930506-m04:/home/docker/cp-test_ha-930506-m03_ha-930506-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test_ha-930506-m03_ha-930506-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp testdata/cp-test.txt ha-930506-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3593630430/001/cp-test_ha-930506-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m04:/home/docker/cp-test.txt ha-930506:/home/docker/cp-test_ha-930506-m04_ha-930506.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test_ha-930506-m04_ha-930506.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m04:/home/docker/cp-test.txt ha-930506-m02:/home/docker/cp-test_ha-930506-m04_ha-930506-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m02 "sudo cat /home/docker/cp-test_ha-930506-m04_ha-930506-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 cp ha-930506-m04:/home/docker/cp-test.txt ha-930506-m03:/home/docker/cp-test_ha-930506-m04_ha-930506-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m03 "sudo cat /home/docker/cp-test_ha-930506-m04_ha-930506-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.14s)
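
CopyFile exercises `minikube cp` in every direction (host to node, node to host, node to node) and verifies each copy by catting the file over ssh. A condensed sketch follows; commands are taken from the log, with the host-side destination path shortened for readability.

  # host -> control plane, then verify
  out/minikube-linux-amd64 -p ha-930506 cp testdata/cp-test.txt ha-930506:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506 "sudo cat /home/docker/cp-test.txt"
  # node -> host (destination path shortened here)
  out/minikube-linux-amd64 -p ha-930506 cp ha-930506:/home/docker/cp-test.txt /tmp/cp-test_ha-930506.txt
  # node -> node (control plane to worker), then verify on the target
  out/minikube-linux-amd64 -p ha-930506 cp ha-930506:/home/docker/cp-test.txt ha-930506-m04:/home/docker/cp-test_ha-930506_ha-930506-m04.txt
  out/minikube-linux-amd64 -p ha-930506 ssh -n ha-930506-m04 "sudo cat /home/docker/cp-test_ha-930506_ha-930506-m04.txt"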

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (90.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 node stop m02 --alsologtostderr -v 5
E1019 12:23:50.580409  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:24:31.543500  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:25:02.177207  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 node stop m02 --alsologtostderr -v 5: (1m29.817686733s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5: exit status 7 (665.94912ms)

                                                
                                                
-- stdout --
	ha-930506
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-930506-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930506-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-930506-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:25:19.420502  163073 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:25:19.420618  163073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:25:19.420629  163073 out.go:374] Setting ErrFile to fd 2...
	I1019 12:25:19.420633  163073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:25:19.420854  163073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:25:19.421071  163073 out.go:368] Setting JSON to false
	I1019 12:25:19.421099  163073 mustload.go:65] Loading cluster: ha-930506
	I1019 12:25:19.421231  163073 notify.go:220] Checking for updates...
	I1019 12:25:19.421601  163073 config.go:182] Loaded profile config "ha-930506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:25:19.421621  163073 status.go:174] checking status of ha-930506 ...
	I1019 12:25:19.422100  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.422151  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.441432  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I1019 12:25:19.442095  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.442906  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.442965  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.443322  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.443551  163073 main.go:141] libmachine: (ha-930506) Calling .GetState
	I1019 12:25:19.445345  163073 status.go:371] ha-930506 host status = "Running" (err=<nil>)
	I1019 12:25:19.445364  163073 host.go:66] Checking if "ha-930506" exists ...
	I1019 12:25:19.445686  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.445748  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.459851  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I1019 12:25:19.460544  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.461046  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.461073  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.461493  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.461688  163073 main.go:141] libmachine: (ha-930506) Calling .GetIP
	I1019 12:25:19.465326  163073 main.go:141] libmachine: (ha-930506) DBG | domain ha-930506 has defined MAC address 52:54:00:fe:d3:e7 in network mk-ha-930506
	I1019 12:25:19.465863  163073 main.go:141] libmachine: (ha-930506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:d3:e7", ip: ""} in network mk-ha-930506: {Iface:virbr1 ExpiryTime:2025-10-19 13:19:17 +0000 UTC Type:0 Mac:52:54:00:fe:d3:e7 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-930506 Clientid:01:52:54:00:fe:d3:e7}
	I1019 12:25:19.465909  163073 main.go:141] libmachine: (ha-930506) DBG | domain ha-930506 has defined IP address 192.168.39.26 and MAC address 52:54:00:fe:d3:e7 in network mk-ha-930506
	I1019 12:25:19.466036  163073 host.go:66] Checking if "ha-930506" exists ...
	I1019 12:25:19.466403  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.466455  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.480840  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I1019 12:25:19.481357  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.481907  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.481929  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.482323  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.482518  163073 main.go:141] libmachine: (ha-930506) Calling .DriverName
	I1019 12:25:19.482739  163073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:25:19.482787  163073 main.go:141] libmachine: (ha-930506) Calling .GetSSHHostname
	I1019 12:25:19.486480  163073 main.go:141] libmachine: (ha-930506) DBG | domain ha-930506 has defined MAC address 52:54:00:fe:d3:e7 in network mk-ha-930506
	I1019 12:25:19.486939  163073 main.go:141] libmachine: (ha-930506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:d3:e7", ip: ""} in network mk-ha-930506: {Iface:virbr1 ExpiryTime:2025-10-19 13:19:17 +0000 UTC Type:0 Mac:52:54:00:fe:d3:e7 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-930506 Clientid:01:52:54:00:fe:d3:e7}
	I1019 12:25:19.486971  163073 main.go:141] libmachine: (ha-930506) DBG | domain ha-930506 has defined IP address 192.168.39.26 and MAC address 52:54:00:fe:d3:e7 in network mk-ha-930506
	I1019 12:25:19.487114  163073 main.go:141] libmachine: (ha-930506) Calling .GetSSHPort
	I1019 12:25:19.487347  163073 main.go:141] libmachine: (ha-930506) Calling .GetSSHKeyPath
	I1019 12:25:19.487523  163073 main.go:141] libmachine: (ha-930506) Calling .GetSSHUsername
	I1019 12:25:19.487710  163073 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/ha-930506/id_rsa Username:docker}
	I1019 12:25:19.574310  163073 ssh_runner.go:195] Run: systemctl --version
	I1019 12:25:19.580877  163073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:25:19.596755  163073 kubeconfig.go:125] found "ha-930506" server: "https://192.168.39.254:8443"
	I1019 12:25:19.596805  163073 api_server.go:166] Checking apiserver status ...
	I1019 12:25:19.596846  163073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:25:19.621173  163073 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	W1019 12:25:19.636396  163073 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:25:19.636453  163073 ssh_runner.go:195] Run: ls
	I1019 12:25:19.642581  163073 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1019 12:25:19.647535  163073 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1019 12:25:19.647561  163073 status.go:463] ha-930506 apiserver status = Running (err=<nil>)
	I1019 12:25:19.647572  163073 status.go:176] ha-930506 status: &{Name:ha-930506 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:25:19.647589  163073 status.go:174] checking status of ha-930506-m02 ...
	I1019 12:25:19.647891  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.647931  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.662257  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1019 12:25:19.662847  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.663433  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.663456  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.663797  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.664015  163073 main.go:141] libmachine: (ha-930506-m02) Calling .GetState
	I1019 12:25:19.665966  163073 status.go:371] ha-930506-m02 host status = "Stopped" (err=<nil>)
	I1019 12:25:19.665980  163073 status.go:384] host is not running, skipping remaining checks
	I1019 12:25:19.665988  163073 status.go:176] ha-930506-m02 status: &{Name:ha-930506-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:25:19.666022  163073 status.go:174] checking status of ha-930506-m03 ...
	I1019 12:25:19.666356  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.666398  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.681802  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I1019 12:25:19.682376  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.682881  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.682904  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.683223  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.683448  163073 main.go:141] libmachine: (ha-930506-m03) Calling .GetState
	I1019 12:25:19.685107  163073 status.go:371] ha-930506-m03 host status = "Running" (err=<nil>)
	I1019 12:25:19.685123  163073 host.go:66] Checking if "ha-930506-m03" exists ...
	I1019 12:25:19.685560  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.685610  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.700411  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I1019 12:25:19.701034  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.701547  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.701602  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.701996  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.702250  163073 main.go:141] libmachine: (ha-930506-m03) Calling .GetIP
	I1019 12:25:19.705414  163073 main.go:141] libmachine: (ha-930506-m03) DBG | domain ha-930506-m03 has defined MAC address 52:54:00:76:80:e4 in network mk-ha-930506
	I1019 12:25:19.705828  163073 main.go:141] libmachine: (ha-930506-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:80:e4", ip: ""} in network mk-ha-930506: {Iface:virbr1 ExpiryTime:2025-10-19 13:21:26 +0000 UTC Type:0 Mac:52:54:00:76:80:e4 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-930506-m03 Clientid:01:52:54:00:76:80:e4}
	I1019 12:25:19.705861  163073 main.go:141] libmachine: (ha-930506-m03) DBG | domain ha-930506-m03 has defined IP address 192.168.39.18 and MAC address 52:54:00:76:80:e4 in network mk-ha-930506
	I1019 12:25:19.706034  163073 host.go:66] Checking if "ha-930506-m03" exists ...
	I1019 12:25:19.706371  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.706421  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.720526  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I1019 12:25:19.721022  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.721528  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.721554  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.721858  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.722047  163073 main.go:141] libmachine: (ha-930506-m03) Calling .DriverName
	I1019 12:25:19.722212  163073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:25:19.722231  163073 main.go:141] libmachine: (ha-930506-m03) Calling .GetSSHHostname
	I1019 12:25:19.726053  163073 main.go:141] libmachine: (ha-930506-m03) DBG | domain ha-930506-m03 has defined MAC address 52:54:00:76:80:e4 in network mk-ha-930506
	I1019 12:25:19.726516  163073 main.go:141] libmachine: (ha-930506-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:80:e4", ip: ""} in network mk-ha-930506: {Iface:virbr1 ExpiryTime:2025-10-19 13:21:26 +0000 UTC Type:0 Mac:52:54:00:76:80:e4 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-930506-m03 Clientid:01:52:54:00:76:80:e4}
	I1019 12:25:19.726565  163073 main.go:141] libmachine: (ha-930506-m03) DBG | domain ha-930506-m03 has defined IP address 192.168.39.18 and MAC address 52:54:00:76:80:e4 in network mk-ha-930506
	I1019 12:25:19.726710  163073 main.go:141] libmachine: (ha-930506-m03) Calling .GetSSHPort
	I1019 12:25:19.726897  163073 main.go:141] libmachine: (ha-930506-m03) Calling .GetSSHKeyPath
	I1019 12:25:19.727023  163073 main.go:141] libmachine: (ha-930506-m03) Calling .GetSSHUsername
	I1019 12:25:19.727203  163073 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/ha-930506-m03/id_rsa Username:docker}
	I1019 12:25:19.810577  163073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:25:19.830619  163073 kubeconfig.go:125] found "ha-930506" server: "https://192.168.39.254:8443"
	I1019 12:25:19.830657  163073 api_server.go:166] Checking apiserver status ...
	I1019 12:25:19.830707  163073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:25:19.849551  163073 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1892/cgroup
	W1019 12:25:19.860468  163073 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1892/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:25:19.860560  163073 ssh_runner.go:195] Run: ls
	I1019 12:25:19.865203  163073 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1019 12:25:19.870574  163073 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1019 12:25:19.870598  163073 status.go:463] ha-930506-m03 apiserver status = Running (err=<nil>)
	I1019 12:25:19.870607  163073 status.go:176] ha-930506-m03 status: &{Name:ha-930506-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:25:19.870623  163073 status.go:174] checking status of ha-930506-m04 ...
	I1019 12:25:19.870926  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.870968  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.886559  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I1019 12:25:19.887102  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.887640  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.887666  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.888020  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.888202  163073 main.go:141] libmachine: (ha-930506-m04) Calling .GetState
	I1019 12:25:19.890264  163073 status.go:371] ha-930506-m04 host status = "Running" (err=<nil>)
	I1019 12:25:19.890295  163073 host.go:66] Checking if "ha-930506-m04" exists ...
	I1019 12:25:19.890634  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.890672  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.904336  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35265
	I1019 12:25:19.904850  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.905390  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.905422  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.905781  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.905955  163073 main.go:141] libmachine: (ha-930506-m04) Calling .GetIP
	I1019 12:25:19.909164  163073 main.go:141] libmachine: (ha-930506-m04) DBG | domain ha-930506-m04 has defined MAC address 52:54:00:a2:a9:f7 in network mk-ha-930506
	I1019 12:25:19.909737  163073 main.go:141] libmachine: (ha-930506-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:a9:f7", ip: ""} in network mk-ha-930506: {Iface:virbr1 ExpiryTime:2025-10-19 13:23:07 +0000 UTC Type:0 Mac:52:54:00:a2:a9:f7 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-930506-m04 Clientid:01:52:54:00:a2:a9:f7}
	I1019 12:25:19.909763  163073 main.go:141] libmachine: (ha-930506-m04) DBG | domain ha-930506-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:a2:a9:f7 in network mk-ha-930506
	I1019 12:25:19.909973  163073 host.go:66] Checking if "ha-930506-m04" exists ...
	I1019 12:25:19.910305  163073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:25:19.910350  163073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:25:19.925391  163073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I1019 12:25:19.925892  163073 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:25:19.926351  163073 main.go:141] libmachine: Using API Version  1
	I1019 12:25:19.926370  163073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:25:19.926675  163073 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:25:19.926858  163073 main.go:141] libmachine: (ha-930506-m04) Calling .DriverName
	I1019 12:25:19.927031  163073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:25:19.927067  163073 main.go:141] libmachine: (ha-930506-m04) Calling .GetSSHHostname
	I1019 12:25:19.930545  163073 main.go:141] libmachine: (ha-930506-m04) DBG | domain ha-930506-m04 has defined MAC address 52:54:00:a2:a9:f7 in network mk-ha-930506
	I1019 12:25:19.931087  163073 main.go:141] libmachine: (ha-930506-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:a9:f7", ip: ""} in network mk-ha-930506: {Iface:virbr1 ExpiryTime:2025-10-19 13:23:07 +0000 UTC Type:0 Mac:52:54:00:a2:a9:f7 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-930506-m04 Clientid:01:52:54:00:a2:a9:f7}
	I1019 12:25:19.931111  163073 main.go:141] libmachine: (ha-930506-m04) DBG | domain ha-930506-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:a2:a9:f7 in network mk-ha-930506
	I1019 12:25:19.931350  163073 main.go:141] libmachine: (ha-930506-m04) Calling .GetSSHPort
	I1019 12:25:19.931548  163073 main.go:141] libmachine: (ha-930506-m04) Calling .GetSSHKeyPath
	I1019 12:25:19.931693  163073 main.go:141] libmachine: (ha-930506-m04) Calling .GetSSHUsername
	I1019 12:25:19.931837  163073 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/ha-930506-m04/id_rsa Username:docker}
	I1019 12:25:20.015929  163073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:25:20.033099  163073 status.go:176] ha-930506-m04 status: &{Name:ha-930506-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (90.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (34.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 node start m02 --alsologtostderr -v 5
E1019 12:25:53.465225  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 node start m02 --alsologtostderr -v 5: (33.804147068s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5: (1.013343354s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.90s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.086162615s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (301.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 stop --alsologtostderr -v 5
E1019 12:28:09.603544  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:37.307159  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 stop --alsologtostderr -v 5: (3m0.638809255s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 start --wait true --alsologtostderr -v 5
E1019 12:30:02.177138  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 start --wait true --alsologtostderr -v 5: (2m0.878851958s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (301.65s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 node delete m03 --alsologtostderr -v 5: (18.148623239s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.93s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (241.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 stop --alsologtostderr -v 5
E1019 12:31:25.247956  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:33:09.607707  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:35:02.175695  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 stop --alsologtostderr -v 5: (4m1.19471461s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5: exit status 7 (113.030559ms)

                                                
                                                
-- stdout --
	ha-930506
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930506-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930506-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:35:19.169482  166636 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:35:19.169772  166636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:35:19.169781  166636 out.go:374] Setting ErrFile to fd 2...
	I1019 12:35:19.169784  166636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:35:19.169973  166636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:35:19.170170  166636 out.go:368] Setting JSON to false
	I1019 12:35:19.170201  166636 mustload.go:65] Loading cluster: ha-930506
	I1019 12:35:19.170323  166636 notify.go:220] Checking for updates...
	I1019 12:35:19.170642  166636 config.go:182] Loaded profile config "ha-930506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:35:19.170665  166636 status.go:174] checking status of ha-930506 ...
	I1019 12:35:19.171067  166636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:35:19.171105  166636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:35:19.192837  166636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45177
	I1019 12:35:19.193399  166636 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:35:19.194024  166636 main.go:141] libmachine: Using API Version  1
	I1019 12:35:19.194049  166636 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:35:19.194408  166636 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:35:19.194611  166636 main.go:141] libmachine: (ha-930506) Calling .GetState
	I1019 12:35:19.196273  166636 status.go:371] ha-930506 host status = "Stopped" (err=<nil>)
	I1019 12:35:19.196302  166636 status.go:384] host is not running, skipping remaining checks
	I1019 12:35:19.196310  166636 status.go:176] ha-930506 status: &{Name:ha-930506 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:35:19.196344  166636 status.go:174] checking status of ha-930506-m02 ...
	I1019 12:35:19.196777  166636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:35:19.196823  166636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:35:19.210676  166636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39691
	I1019 12:35:19.211160  166636 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:35:19.211651  166636 main.go:141] libmachine: Using API Version  1
	I1019 12:35:19.211673  166636 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:35:19.212010  166636 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:35:19.212180  166636 main.go:141] libmachine: (ha-930506-m02) Calling .GetState
	I1019 12:35:19.213937  166636 status.go:371] ha-930506-m02 host status = "Stopped" (err=<nil>)
	I1019 12:35:19.213953  166636 status.go:384] host is not running, skipping remaining checks
	I1019 12:35:19.213969  166636 status.go:176] ha-930506-m02 status: &{Name:ha-930506-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:35:19.213988  166636 status.go:174] checking status of ha-930506-m04 ...
	I1019 12:35:19.214262  166636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:35:19.214325  166636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:35:19.227689  166636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I1019 12:35:19.228074  166636 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:35:19.228547  166636 main.go:141] libmachine: Using API Version  1
	I1019 12:35:19.228570  166636 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:35:19.228908  166636 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:35:19.229109  166636 main.go:141] libmachine: (ha-930506-m04) Calling .GetState
	I1019 12:35:19.230769  166636 status.go:371] ha-930506-m04 host status = "Stopped" (err=<nil>)
	I1019 12:35:19.230781  166636 status.go:384] host is not running, skipping remaining checks
	I1019 12:35:19.230786  166636 status.go:176] ha-930506-m04 status: &{Name:ha-930506-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (241.31s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (103.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.280776248s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.05s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (89.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 node add --control-plane --alsologtostderr -v 5
E1019 12:38:09.608590  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-930506 node add --control-plane --alsologtostderr -v 5: (1m29.070803264s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-930506 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (89.91s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (78.57s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-396599 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 12:39:32.668886  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-396599 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.572464069s)
--- PASS: TestJSONOutput/start/Command (78.57s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-396599 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-396599 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.94s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-396599 --output=json --user=testUser
E1019 12:40:02.177197  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-396599 --output=json --user=testUser: (6.935587816s)
--- PASS: TestJSONOutput/stop/Command (6.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-266835 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-266835 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (61.583898ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"56b54dcc-4b25-4788-9f13-57794be051fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-266835] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc81c8c6-96a9-46b3-8b47-cfa9ee90e9e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"1a346ca0-a200-44f1-a952-463421496da8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41aa043c-aafc-4386-a28b-13f75128b88e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig"}}
	{"specversion":"1.0","id":"93bc663f-a790-4d87-91a1-5a19f4fdb8e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube"}}
	{"specversion":"1.0","id":"31d233e8-a083-4c70-9d60-a79634f4e389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bee5a96e-17e5-4132-8c86-ce9170fd0177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"12133b4a-785b-48c9-a6cb-3e4453be068c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-266835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-266835
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (79.56s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-062576 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-062576 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.089764867s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-065500 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-065500 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.681523454s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-062576
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-065500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-065500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-065500
helpers_test.go:175: Cleaning up "first-062576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-062576
--- PASS: TestMinikubeProfile (79.56s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-623419 --memory=3072 --mount-string /tmp/TestMountStartserial4166276214/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-623419 --memory=3072 --mount-string /tmp/TestMountStartserial4166276214/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.212772952s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.21s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-623419 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-623419 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-636331 --memory=3072 --mount-string /tmp/TestMountStartserial4166276214/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-636331 --memory=3072 --mount-string /tmp/TestMountStartserial4166276214/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.013513578s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.01s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-636331 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-636331 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-623419 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-636331 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-636331 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-636331
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-636331: (1.188880494s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.69s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-636331
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-636331: (18.687538425s)
--- PASS: TestMountStart/serial/RestartStopped (19.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-636331 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-636331 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (98.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-875731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 12:43:09.602724  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-875731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m37.816996353s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.24s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-875731 -- rollout status deployment/busybox: (4.662079193s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-cwdxq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-mh9vb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-cwdxq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-mh9vb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-cwdxq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-mh9vb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.15s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-cwdxq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-cwdxq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-mh9vb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-875731 -- exec busybox-7b57f96db7-mh9vb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (43.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-875731 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-875731 -v=5 --alsologtostderr: (42.927123487s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.47s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-875731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp testdata/cp-test.txt multinode-875731:/home/docker/cp-test.txt
E1019 12:45:02.175624  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2287556416/001/cp-test_multinode-875731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731:/home/docker/cp-test.txt multinode-875731-m02:/home/docker/cp-test_multinode-875731_multinode-875731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m02 "sudo cat /home/docker/cp-test_multinode-875731_multinode-875731-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731:/home/docker/cp-test.txt multinode-875731-m03:/home/docker/cp-test_multinode-875731_multinode-875731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m03 "sudo cat /home/docker/cp-test_multinode-875731_multinode-875731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp testdata/cp-test.txt multinode-875731-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2287556416/001/cp-test_multinode-875731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731-m02:/home/docker/cp-test.txt multinode-875731:/home/docker/cp-test_multinode-875731-m02_multinode-875731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731 "sudo cat /home/docker/cp-test_multinode-875731-m02_multinode-875731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731-m02:/home/docker/cp-test.txt multinode-875731-m03:/home/docker/cp-test_multinode-875731-m02_multinode-875731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m03 "sudo cat /home/docker/cp-test_multinode-875731-m02_multinode-875731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp testdata/cp-test.txt multinode-875731-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2287556416/001/cp-test_multinode-875731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731-m03:/home/docker/cp-test.txt multinode-875731:/home/docker/cp-test_multinode-875731-m03_multinode-875731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731 "sudo cat /home/docker/cp-test_multinode-875731-m03_multinode-875731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 cp multinode-875731-m03:/home/docker/cp-test.txt multinode-875731-m02:/home/docker/cp-test_multinode-875731-m03_multinode-875731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 ssh -n multinode-875731-m02 "sudo cat /home/docker/cp-test_multinode-875731-m03_multinode-875731-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.16s)

                                                
                                    
TestMultiNode/serial/StopNode (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-875731 node stop m03: (1.848883449s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-875731 status: exit status 7 (423.957862ms)

                                                
                                                
-- stdout --
	multinode-875731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-875731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-875731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr: exit status 7 (412.165972ms)

                                                
                                                
-- stdout --
	multinode-875731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-875731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-875731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:45:11.094087  174386 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:45:11.094178  174386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:45:11.094186  174386 out.go:374] Setting ErrFile to fd 2...
	I1019 12:45:11.094189  174386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:45:11.094370  174386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:45:11.094517  174386 out.go:368] Setting JSON to false
	I1019 12:45:11.094538  174386 mustload.go:65] Loading cluster: multinode-875731
	I1019 12:45:11.094569  174386 notify.go:220] Checking for updates...
	I1019 12:45:11.094912  174386 config.go:182] Loaded profile config "multinode-875731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:45:11.094928  174386 status.go:174] checking status of multinode-875731 ...
	I1019 12:45:11.095334  174386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:45:11.095398  174386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:45:11.109495  174386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1019 12:45:11.109960  174386 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:45:11.110502  174386 main.go:141] libmachine: Using API Version  1
	I1019 12:45:11.110524  174386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:45:11.110867  174386 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:45:11.111046  174386 main.go:141] libmachine: (multinode-875731) Calling .GetState
	I1019 12:45:11.112908  174386 status.go:371] multinode-875731 host status = "Running" (err=<nil>)
	I1019 12:45:11.112927  174386 host.go:66] Checking if "multinode-875731" exists ...
	I1019 12:45:11.113228  174386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:45:11.113264  174386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:45:11.127202  174386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40069
	I1019 12:45:11.127544  174386 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:45:11.127942  174386 main.go:141] libmachine: Using API Version  1
	I1019 12:45:11.127964  174386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:45:11.128273  174386 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:45:11.128482  174386 main.go:141] libmachine: (multinode-875731) Calling .GetIP
	I1019 12:45:11.131031  174386 main.go:141] libmachine: (multinode-875731) DBG | domain multinode-875731 has defined MAC address 52:54:00:62:68:ae in network mk-multinode-875731
	I1019 12:45:11.131409  174386 main.go:141] libmachine: (multinode-875731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:68:ae", ip: ""} in network mk-multinode-875731: {Iface:virbr1 ExpiryTime:2025-10-19 13:42:47 +0000 UTC Type:0 Mac:52:54:00:62:68:ae Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-875731 Clientid:01:52:54:00:62:68:ae}
	I1019 12:45:11.131453  174386 main.go:141] libmachine: (multinode-875731) DBG | domain multinode-875731 has defined IP address 192.168.39.84 and MAC address 52:54:00:62:68:ae in network mk-multinode-875731
	I1019 12:45:11.131587  174386 host.go:66] Checking if "multinode-875731" exists ...
	I1019 12:45:11.131856  174386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:45:11.131892  174386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:45:11.144899  174386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I1019 12:45:11.145316  174386 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:45:11.145718  174386 main.go:141] libmachine: Using API Version  1
	I1019 12:45:11.145735  174386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:45:11.146021  174386 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:45:11.146204  174386 main.go:141] libmachine: (multinode-875731) Calling .DriverName
	I1019 12:45:11.146399  174386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:45:11.146421  174386 main.go:141] libmachine: (multinode-875731) Calling .GetSSHHostname
	I1019 12:45:11.149326  174386 main.go:141] libmachine: (multinode-875731) DBG | domain multinode-875731 has defined MAC address 52:54:00:62:68:ae in network mk-multinode-875731
	I1019 12:45:11.149766  174386 main.go:141] libmachine: (multinode-875731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:68:ae", ip: ""} in network mk-multinode-875731: {Iface:virbr1 ExpiryTime:2025-10-19 13:42:47 +0000 UTC Type:0 Mac:52:54:00:62:68:ae Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-875731 Clientid:01:52:54:00:62:68:ae}
	I1019 12:45:11.149795  174386 main.go:141] libmachine: (multinode-875731) DBG | domain multinode-875731 has defined IP address 192.168.39.84 and MAC address 52:54:00:62:68:ae in network mk-multinode-875731
	I1019 12:45:11.149939  174386 main.go:141] libmachine: (multinode-875731) Calling .GetSSHPort
	I1019 12:45:11.150122  174386 main.go:141] libmachine: (multinode-875731) Calling .GetSSHKeyPath
	I1019 12:45:11.150294  174386 main.go:141] libmachine: (multinode-875731) Calling .GetSSHUsername
	I1019 12:45:11.150422  174386 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/multinode-875731/id_rsa Username:docker}
	I1019 12:45:11.226703  174386 ssh_runner.go:195] Run: systemctl --version
	I1019 12:45:11.232447  174386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:45:11.248316  174386 kubeconfig.go:125] found "multinode-875731" server: "https://192.168.39.84:8443"
	I1019 12:45:11.248356  174386 api_server.go:166] Checking apiserver status ...
	I1019 12:45:11.248397  174386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:45:11.266739  174386 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1381/cgroup
	W1019 12:45:11.277742  174386 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1381/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:45:11.277811  174386 ssh_runner.go:195] Run: ls
	I1019 12:45:11.282929  174386 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I1019 12:45:11.288494  174386 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I1019 12:45:11.288517  174386 status.go:463] multinode-875731 apiserver status = Running (err=<nil>)
	I1019 12:45:11.288532  174386 status.go:176] multinode-875731 status: &{Name:multinode-875731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:45:11.288558  174386 status.go:174] checking status of multinode-875731-m02 ...
	I1019 12:45:11.288852  174386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:45:11.288897  174386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:45:11.302918  174386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1019 12:45:11.303354  174386 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:45:11.303846  174386 main.go:141] libmachine: Using API Version  1
	I1019 12:45:11.303866  174386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:45:11.304186  174386 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:45:11.304428  174386 main.go:141] libmachine: (multinode-875731-m02) Calling .GetState
	I1019 12:45:11.305983  174386 status.go:371] multinode-875731-m02 host status = "Running" (err=<nil>)
	I1019 12:45:11.306000  174386 host.go:66] Checking if "multinode-875731-m02" exists ...
	I1019 12:45:11.306312  174386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:45:11.306355  174386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:45:11.319350  174386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I1019 12:45:11.319792  174386 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:45:11.320205  174386 main.go:141] libmachine: Using API Version  1
	I1019 12:45:11.320236  174386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:45:11.320539  174386 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:45:11.320718  174386 main.go:141] libmachine: (multinode-875731-m02) Calling .GetIP
	I1019 12:45:11.323273  174386 main.go:141] libmachine: (multinode-875731-m02) DBG | domain multinode-875731-m02 has defined MAC address 52:54:00:66:6b:3b in network mk-multinode-875731
	I1019 12:45:11.323778  174386 main.go:141] libmachine: (multinode-875731-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:6b:3b", ip: ""} in network mk-multinode-875731: {Iface:virbr1 ExpiryTime:2025-10-19 13:43:40 +0000 UTC Type:0 Mac:52:54:00:66:6b:3b Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-875731-m02 Clientid:01:52:54:00:66:6b:3b}
	I1019 12:45:11.323805  174386 main.go:141] libmachine: (multinode-875731-m02) DBG | domain multinode-875731-m02 has defined IP address 192.168.39.206 and MAC address 52:54:00:66:6b:3b in network mk-multinode-875731
	I1019 12:45:11.323972  174386 host.go:66] Checking if "multinode-875731-m02" exists ...
	I1019 12:45:11.324384  174386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:45:11.324429  174386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:45:11.337417  174386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I1019 12:45:11.337832  174386 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:45:11.338276  174386 main.go:141] libmachine: Using API Version  1
	I1019 12:45:11.338317  174386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:45:11.338630  174386 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:45:11.338807  174386 main.go:141] libmachine: (multinode-875731-m02) Calling .DriverName
	I1019 12:45:11.338956  174386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:45:11.338980  174386 main.go:141] libmachine: (multinode-875731-m02) Calling .GetSSHHostname
	I1019 12:45:11.341978  174386 main.go:141] libmachine: (multinode-875731-m02) DBG | domain multinode-875731-m02 has defined MAC address 52:54:00:66:6b:3b in network mk-multinode-875731
	I1019 12:45:11.342448  174386 main.go:141] libmachine: (multinode-875731-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:6b:3b", ip: ""} in network mk-multinode-875731: {Iface:virbr1 ExpiryTime:2025-10-19 13:43:40 +0000 UTC Type:0 Mac:52:54:00:66:6b:3b Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-875731-m02 Clientid:01:52:54:00:66:6b:3b}
	I1019 12:45:11.342492  174386 main.go:141] libmachine: (multinode-875731-m02) DBG | domain multinode-875731-m02 has defined IP address 192.168.39.206 and MAC address 52:54:00:66:6b:3b in network mk-multinode-875731
	I1019 12:45:11.342651  174386 main.go:141] libmachine: (multinode-875731-m02) Calling .GetSSHPort
	I1019 12:45:11.342795  174386 main.go:141] libmachine: (multinode-875731-m02) Calling .GetSSHKeyPath
	I1019 12:45:11.342975  174386 main.go:141] libmachine: (multinode-875731-m02) Calling .GetSSHUsername
	I1019 12:45:11.343105  174386 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-144655/.minikube/machines/multinode-875731-m02/id_rsa Username:docker}
	I1019 12:45:11.425574  174386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:45:11.442009  174386 status.go:176] multinode-875731-m02 status: &{Name:multinode-875731-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:45:11.442048  174386 status.go:174] checking status of multinode-875731-m03 ...
	I1019 12:45:11.442490  174386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:45:11.442540  174386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:45:11.456560  174386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I1019 12:45:11.457004  174386 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:45:11.457482  174386 main.go:141] libmachine: Using API Version  1
	I1019 12:45:11.457503  174386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:45:11.457822  174386 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:45:11.457975  174386 main.go:141] libmachine: (multinode-875731-m03) Calling .GetState
	I1019 12:45:11.459543  174386 status.go:371] multinode-875731-m03 host status = "Stopped" (err=<nil>)
	I1019 12:45:11.459559  174386 status.go:384] host is not running, skipping remaining checks
	I1019 12:45:11.459567  174386 status.go:176] multinode-875731-m03 status: &{Name:multinode-875731-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.69s)
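
The status check in the stderr block above verifies the control plane three ways: the host through libmachine, the kubelet with `systemctl is-active`, and the apiserver by fetching /healthz (the log shows `https://192.168.39.84:8443/healthz returned 200: ok`). A minimal Go sketch of that last probe, assuming the same endpoint and skipping certificate verification purely for illustration; the real check trusts the cluster's own CA rather than disabling verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver health endpoint printed in the log above.
	// InsecureSkipVerify is an illustration-only shortcut.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.84:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log above shows 200: ok
}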

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (36.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-875731 node start m03 -v=5 --alsologtostderr: (36.0532551s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.67s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (292.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-875731
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-875731
E1019 12:48:05.251770  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:48:09.606513  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-875731: (2m46.565116314s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-875731 --wait=true -v=5 --alsologtostderr
E1019 12:50:02.176195  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-875731 --wait=true -v=5 --alsologtostderr: (2m5.686590519s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-875731
--- PASS: TestMultiNode/serial/RestartKeepsNodes (292.35s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-875731 node delete m03: (2.281998012s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.81s)
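
The Ready check at multinode_test.go:444 hands kubectl a go-template that walks the node list. The same template can be exercised directly with Go's text/template against a hand-built stand-in for the node list, which makes the nesting (items, then status.conditions, then the Ready entry) easier to see; the sample data below is illustrative, not taken from the cluster:

package main

import (
	"os"
	"text/template"
)

func main() {
	// The template string the test passes to kubectl, unchanged.
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hand-built stand-in for the `kubectl get nodes -o json` item list.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// Prints " True" for the single Ready node above.
	template.Must(template.New("ready").Parse(tpl)).Execute(os.Stdout, nodes)
}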

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (148.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 stop
E1019 12:53:09.602669  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-875731 stop: (2m28.577749747s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-875731 status: exit status 7 (100.103613ms)

                                                
                                                
-- stdout --
	multinode-875731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-875731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr: exit status 7 (98.997565ms)

                                                
                                                
-- stdout --
	multinode-875731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-875731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:53:12.025425  177093 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:12.025660  177093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:12.025668  177093 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:12.025672  177093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:12.025850  177093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 12:53:12.026027  177093 out.go:368] Setting JSON to false
	I1019 12:53:12.026053  177093 mustload.go:65] Loading cluster: multinode-875731
	I1019 12:53:12.026126  177093 notify.go:220] Checking for updates...
	I1019 12:53:12.026416  177093 config.go:182] Loaded profile config "multinode-875731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:12.026435  177093 status.go:174] checking status of multinode-875731 ...
	I1019 12:53:12.026866  177093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:53:12.026904  177093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:53:12.048666  177093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I1019 12:53:12.049257  177093 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:53:12.049915  177093 main.go:141] libmachine: Using API Version  1
	I1019 12:53:12.049950  177093 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:53:12.050411  177093 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:53:12.050685  177093 main.go:141] libmachine: (multinode-875731) Calling .GetState
	I1019 12:53:12.052375  177093 status.go:371] multinode-875731 host status = "Stopped" (err=<nil>)
	I1019 12:53:12.052396  177093 status.go:384] host is not running, skipping remaining checks
	I1019 12:53:12.052410  177093 status.go:176] multinode-875731 status: &{Name:multinode-875731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:53:12.052448  177093 status.go:174] checking status of multinode-875731-m02 ...
	I1019 12:53:12.052949  177093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 12:53:12.053010  177093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1019 12:53:12.068799  177093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I1019 12:53:12.069383  177093 main.go:141] libmachine: () Calling .GetVersion
	I1019 12:53:12.069916  177093 main.go:141] libmachine: Using API Version  1
	I1019 12:53:12.069937  177093 main.go:141] libmachine: () Calling .SetConfigRaw
	I1019 12:53:12.070700  177093 main.go:141] libmachine: () Calling .GetMachineName
	I1019 12:53:12.070937  177093 main.go:141] libmachine: (multinode-875731-m02) Calling .GetState
	I1019 12:53:12.072948  177093 status.go:371] multinode-875731-m02 host status = "Stopped" (err=<nil>)
	I1019 12:53:12.072969  177093 status.go:384] host is not running, skipping remaining checks
	I1019 12:53:12.072977  177093 status.go:176] multinode-875731-m02 status: &{Name:multinode-875731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (148.78s)
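
Both status calls above exit with status 7 once every node is stopped, and a later section of this report notes that this code "may be ok". A short Go sketch of driving the same binary with os/exec and treating exit code 7 as "stopped" rather than a hard failure; the relative binary path matches this report's workspace layout and would differ elsewhere:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same status command the test runs above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-875731", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all components running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 is what the report shows when hosts are stopped.
		fmt.Println("cluster reported as stopped (exit status 7, may be ok)")
	default:
		fmt.Println("status failed:", err)
	}
}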

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (117.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-875731 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 12:55:02.176018  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-875731 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m57.405877541s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-875731 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (117.97s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-875731
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-875731-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-875731-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (68.493499ms)

                                                
                                                
-- stdout --
	* [multinode-875731-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-875731-m02' is duplicated with machine name 'multinode-875731-m02' in profile 'multinode-875731'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-875731-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-875731-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.460857479s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-875731
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-875731: exit status 80 (230.660252ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-875731 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-875731-m03 already exists in multinode-875731-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-875731-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.67s)
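
The first start above fails because the requested profile name collides with a machine name generated for a node of an existing profile (name-m02, name-m03, ...). A rough sketch of that uniqueness rule, as an illustration only and not minikube's implementation:

package main

import "fmt"

// validateProfileName sketches the rule exercised above: a new profile may
// not reuse an existing profile name or one of the machine names derived
// for that profile's extra nodes.
func validateProfileName(name string, existing map[string][]string) error {
	for profile, nodes := range existing {
		if name == profile {
			return fmt.Errorf("profile name %q already exists", name)
		}
		for _, node := range nodes {
			if name == profile+"-"+node {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, name, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{"multinode-875731": {"m02", "m03"}}
	fmt.Println(validateProfileName("multinode-875731-m02", existing)) // rejected
	fmt.Println(validateProfileName("multinode-875731-m04", existing)) // accepted (nil)
}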

                                                
                                    
x
+
TestScheduledStopUnix (112.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-407449 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-407449 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.790417942s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-407449 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-407449 -n scheduled-stop-407449
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-407449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1019 12:59:20.291976  148701 retry.go:31] will retry after 123.251µs: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.293172  148701 retry.go:31] will retry after 145.167µs: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.294335  148701 retry.go:31] will retry after 337.08µs: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.295432  148701 retry.go:31] will retry after 466.161µs: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.296585  148701 retry.go:31] will retry after 255.154µs: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.297736  148701 retry.go:31] will retry after 667.873µs: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.298889  148701 retry.go:31] will retry after 738.514µs: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.300036  148701 retry.go:31] will retry after 1.905518ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.302252  148701 retry.go:31] will retry after 2.12916ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.305467  148701 retry.go:31] will retry after 2.010147ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.307719  148701 retry.go:31] will retry after 8.473037ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.316954  148701 retry.go:31] will retry after 5.889413ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.323197  148701 retry.go:31] will retry after 11.036313ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.334374  148701 retry.go:31] will retry after 24.149766ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.359649  148701 retry.go:31] will retry after 19.132398ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
I1019 12:59:20.379955  148701 retry.go:31] will retry after 26.375205ms: open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/scheduled-stop-407449/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-407449 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-407449 -n scheduled-stop-407449
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-407449
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-407449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1019 13:00:02.175663  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-407449
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-407449: exit status 7 (78.745101ms)

                                                
                                                
-- stdout --
	scheduled-stop-407449
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-407449 -n scheduled-stop-407449
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-407449 -n scheduled-stop-407449: exit status 7 (79.372772ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-407449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-407449
--- PASS: TestScheduledStopUnix (112.61s)
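
The retry.go lines above poll for the scheduled-stop pid file with waits that roughly double between attempts. A compact Go sketch of the same wait-for-file-with-growing-backoff pattern; the path, starting wait, cap, and deadline are placeholder choices:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the deadline passes, roughly
// doubling the wait between attempts, mirroring the retry.go lines above.
func waitForFile(path string, deadline time.Duration) error {
	wait := 100 * time.Microsecond
	start := time.Now()
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !errors.Is(err, os.ErrNotExist) {
			return err
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s waiting for %s", deadline, path)
		}
		time.Sleep(wait)
		if wait < 50*time.Millisecond {
			wait *= 2
		}
	}
}

func main() {
	// Placeholder path; the test watches the profile's scheduled-stop pid file.
	err := waitForFile("/tmp/scheduled-stop-example.pid", 2*time.Second)
	fmt.Println("result:", err)
}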

                                                
                                    
x
+
TestRunningBinaryUpgrade (146s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3666164319 start -p running-upgrade-485921 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3666164319 start -p running-upgrade-485921 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.888867248s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-485921 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-485921 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (52.602603227s)
helpers_test.go:175: Cleaning up "running-upgrade-485921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-485921
--- PASS: TestRunningBinaryUpgrade (146.00s)

                                                
                                    
x
+
TestKubernetesUpgrade (201.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.719825753s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-511839
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-511839: (1.839054324s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-511839 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-511839 status --format={{.Host}}: exit status 7 (79.077097ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.044242539s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-511839 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (90.951608ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-511839] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-511839
	    minikube start -p kubernetes-upgrade-511839 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5118392 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-511839 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-511839 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.707683135s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-511839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-511839
--- PASS: TestKubernetesUpgrade (201.54s)
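
The downgrade attempt is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106) because the existing cluster already runs v1.34.1 and v1.28.0 was requested. A simple version comparison is enough to express that guard; the sketch below is an illustration under that assumption, not the code minikube uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.34.1" into integers; error handling is omitted for brevity
// and only three-part versions are expected.
func parse(v string) (major, minor, patch int) {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	major, _ = strconv.Atoi(parts[0])
	minor, _ = strconv.Atoi(parts[1])
	patch, _ = strconv.Atoi(parts[2])
	return
}

// newerThan reports whether a is a strictly higher version than b.
func newerThan(a, b string) bool {
	am, an, ap := parse(a)
	bm, bn, bp := parse(b)
	if am != bm {
		return am > bm
	}
	if an != bn {
		return an > bn
	}
	return ap > bp
}

func main() {
	current, requested := "v1.34.1", "v1.28.0"
	if newerThan(current, requested) {
		fmt.Printf("refusing to downgrade existing Kubernetes %s cluster to %s\n", current, requested)
		return
	}
	fmt.Println("proceeding with start")
}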

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725214 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-725214 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (83.918918ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-725214] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
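
The usage error above (exit status 14) comes from combining --no-kubernetes with an explicit --kubernetes-version. The validation amounts to a mutually-exclusive-flags check, sketched here with the standard flag package; the flag names mirror the CLI but the program itself is illustrative:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// Reproduce only the rule seen above: the two flags are mutually
	// exclusive, and the report shows the failure exiting with code 14.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flag combination accepted")
}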

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (85.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725214 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725214 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.38637482s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-725214 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-422995 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-422995 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (114.011941ms)

                                                
                                                
-- stdout --
	* [false-422995] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 13:00:35.078762  181536 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:00:35.078873  181536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:00:35.078882  181536 out.go:374] Setting ErrFile to fd 2...
	I1019 13:00:35.078888  181536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:00:35.079102  181536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-144655/.minikube/bin
	I1019 13:00:35.079690  181536 out.go:368] Setting JSON to false
	I1019 13:00:35.080659  181536 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6169,"bootTime":1760872666,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 13:00:35.080778  181536 start.go:141] virtualization: kvm guest
	I1019 13:00:35.082727  181536 out.go:179] * [false-422995] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 13:00:35.083956  181536 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:00:35.083970  181536 notify.go:220] Checking for updates...
	I1019 13:00:35.086032  181536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:00:35.087115  181536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-144655/kubeconfig
	I1019 13:00:35.088157  181536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-144655/.minikube
	I1019 13:00:35.089193  181536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 13:00:35.090219  181536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:00:35.091751  181536 config.go:182] Loaded profile config "NoKubernetes-725214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:00:35.091857  181536 config.go:182] Loaded profile config "force-systemd-env-773419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:00:35.091965  181536 config.go:182] Loaded profile config "offline-crio-686849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:00:35.092073  181536 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:00:35.127947  181536 out.go:179] * Using the kvm2 driver based on user configuration
	I1019 13:00:35.128934  181536 start.go:305] selected driver: kvm2
	I1019 13:00:35.128953  181536 start.go:925] validating driver "kvm2" against <nil>
	I1019 13:00:35.128968  181536 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:00:35.130890  181536 out.go:203] 
	W1019 13:00:35.132069  181536 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1019 13:00:35.133081  181536 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-422995 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-422995" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-422995

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422995"

                                                
                                                
----------------------- debugLogs end: false-422995 [took: 3.050353485s] --------------------------------
helpers_test.go:175: Cleaning up "false-422995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-422995
--- PASS: TestNetworkPlugins/group/false (3.32s)
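
The host-side probes above all print the same message because no false-422995 profile exists on the host when the probes run; listing profiles is the quickest way to see what actually exists before reading per-runtime debug output. A minimal sketch using commands that appear elsewhere in this run:

	# List known profiles; a profile that was never started will not appear
	# here, and every host probe for it prints "Profile not found".
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list --output=json   # machine-readable form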

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (31.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725214 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725214 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (30.476065033s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-725214 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-725214 status -o json: exit status 2 (278.908888ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-725214","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-725214
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.66s)
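
The status JSON above is exactly what this step asserts: the host VM is Running while the kubelet and API server stay Stopped. A shell sketch that pulls the same fields (jq is an assumption here, not something the suite uses):

	# minikube exits non-zero (status 2 above) when components are stopped,
	# so tolerate the exit code when scripting around it.
	out/minikube-linux-amd64 -p NoKubernetes-725214 status -o json || true
	# With jq installed, extract the fields the test compares.
	out/minikube-linux-amd64 -p NoKubernetes-725214 status -o json | jq -r '.Host, .Kubelet, .APIServer'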

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (9.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (9.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (115.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.460063700 start -p stopped-upgrade-913890 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.460063700 start -p stopped-upgrade-913890 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.093587917s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.460063700 -p stopped-upgrade-913890 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.460063700 -p stopped-upgrade-913890 stop: (1.728240383s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-913890 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-913890 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.26426495s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (115.09s)
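
Reduced to its essentials, the upgrade path exercised here is: provision with an older released binary, stop the cluster, then start the same profile with the binary under test. A sketch using the paths from this run (the /tmp binary name is specific to this job):

	# 1. Provision with the previously released minikube (v1.32.0 here).
	/tmp/minikube-v1.32.0.460063700 start -p stopped-upgrade-913890 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	# 2. Stop the cluster with that same old binary.
	/tmp/minikube-v1.32.0.460063700 -p stopped-upgrade-913890 stop
	# 3. Restart the profile with the binary under test, upgrading it in place.
	out/minikube-linux-amd64 start -p stopped-upgrade-913890 --memory=3072 --driver=kvm2 --container-runtime=crio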

                                                
                                    
x
+
TestNoKubernetes/serial/Start (45.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725214 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725214 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.164446748s)
--- PASS: TestNoKubernetes/serial/Start (45.16s)

                                                
                                    
x
+
TestPause/serial/Start (116.57s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-969331 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 13:03:09.603037  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-969331 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m56.566063152s)
--- PASS: TestPause/serial/Start (116.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-725214 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-725214 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.814607ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
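
The check above relies on systemctl's exit code: is-active returns 0 only when the unit is active, so the non-zero exit (4 here) is what proves no kubelet runs on a --no-kubernetes node. A minimal form of the same probe (single quotes keep $? from being expanded by the local shell):

	out/minikube-linux-amd64 ssh -p NoKubernetes-725214 'sudo systemctl is-active kubelet; echo "exit=$?"'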

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (8.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (5.254653053s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.393111266s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-725214
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-725214: (1.222941719s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (57.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725214 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725214 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.174004759s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-913890
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-913890: (1.237602994s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-725214 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-725214 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.971801ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (89.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.341080794s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (63.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.494423919s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5cgmw" [99829e9c-f7b3-492d-85ae-60e56ae9116d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003962739s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
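
The ControllerPod step waits for the CNI daemonset pod by label rather than by name. Outside the test helper, the same wait can be expressed with kubectl directly (an equivalent, not the command the suite runs):

	# Block until the kindnet controller pod reports Ready, up to 10 minutes.
	kubectl --context kindnet-422995 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m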

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-422995 "pgrep -a kubelet"
I1019 13:06:59.355719  148701 config.go:182] Loaded profile config "auto-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-422995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-62td2" [ee655cf0-539b-403d-881b-28d06600132a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-62td2" [ee655cf0-539b-403d-881b-28d06600132a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004133911s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-422995 "pgrep -a kubelet"
I1019 13:07:00.178855  148701 config.go:182] Loaded profile config "kindnet-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-422995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qplt9" [4c57a471-0eaa-4abc-9aee-34d110f8293e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qplt9" [4c57a471-0eaa-4abc-9aee-34d110f8293e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004242763s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-422995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-422995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
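
Every network plugin gets the same three probes from inside the netcat deployment: cluster DNS, a connection to localhost, and a connection back to the pod through its own Service name (the hairpin case). The commands, lifted from the runs above (port 8080 is the port the netcat Service exposes in this suite):

	# DNS: resolve the kubernetes Service through cluster DNS.
	kubectl --context auto-422995 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: reach the pod's own listener via 127.0.0.1.
	kubectl --context auto-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: reach the pod again through the "netcat" Service name.
	kubectl --context auto-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"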

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (67.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.215675806s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (90.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.260861625s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5kzzv" [91ff94ee-909d-43b0-a56e-9bfa6baca08d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-5kzzv" [91ff94ee-909d-43b0-a56e-9bfa6baca08d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004987115s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-422995 "pgrep -a kubelet"
I1019 13:08:40.705605  148701 config.go:182] Loaded profile config "calico-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-422995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r9gpw" [79f9d36d-870d-405c-afc5-f8cfd36c86a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r9gpw" [79f9d36d-870d-405c-afc5-f8cfd36c86a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004322564s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-422995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-422995 "pgrep -a kubelet"
I1019 13:08:57.933750  148701 config.go:182] Loaded profile config "custom-flannel-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-422995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dvckn" [3609a12f-0dc7-4b49-b330-35424c03bc79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dvckn" [3609a12f-0dc7-4b49-b330-35424c03bc79] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004669849s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-422995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (50.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.708339274s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (71.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.182690325s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-422995 "pgrep -a kubelet"
I1019 13:09:59.885142  148701 config.go:182] Loaded profile config "enable-default-cni-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-422995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5crg7" [6c263095-7787-428e-b650-0cef5d9f8fc0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1019 13:10:02.175262  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5crg7" [6c263095-7787-428e-b650-0cef5d9f8fc0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004133376s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-422995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (81.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-422995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.227298442s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-km47h" [c5ab4923-43e0-4382-ae89-9fefa474059f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004076019s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-422995 "pgrep -a kubelet"
I1019 13:10:42.337179  148701 config.go:182] Loaded profile config "flannel-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-422995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t776b" [9f428145-c8d1-43e4-ba95-a98f282e62ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t776b" [9f428145-c8d1-43e4-ba95-a98f282e62ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004982317s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-422995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (96.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-725412 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-725412 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m36.593067446s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (107.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-446116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-446116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m47.992269763s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.99s)
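
What distinguishes this start is --preload=false: the node skips the preloaded image tarball and lets the container runtime pull every Kubernetes image individually, which is the usual reason this FirstStart runs longer than the preloaded ones (a description of the flag's intent; the log itself only records the duration). The command, trimmed to its essentials:

	out/minikube-linux-amd64 start -p no-preload-446116 --memory=3072 --preload=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.1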

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-422995 "pgrep -a kubelet"
I1019 13:11:48.597435  148701 config.go:182] Loaded profile config "bridge-422995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-422995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5wpk4" [d7a068d6-db28-4524-beab-5244d041a3f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5wpk4" [d7a068d6-db28-4524-beab-5244d041a3f7] Running
E1019 13:11:53.961467  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:53.967908  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:53.979391  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:54.001342  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:54.042813  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:54.124423  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:54.286730  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:54.608664  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:55.250599  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:56.532942  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:59.094422  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:59.578749  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:59.585229  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:59.596625  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:59.618260  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:59.659748  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:59.741449  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003433919s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (92.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-522966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-522966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m32.983400868s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-422995 exec deployment/netcat -- nslookup kubernetes.default
E1019 13:11:59.903016  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-422995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1019 13:12:00.225387  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-257575 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1019 13:12:20.074116  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:12:34.939977  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:12:40.555970  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-257575 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m23.999874759s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-725412 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b40fb1e1-0250-4665-b58c-0c0c6ea19dba] Pending
helpers_test.go:352: "busybox" [b40fb1e1-0250-4665-b58c-0c0c6ea19dba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b40fb1e1-0250-4665-b58c-0c0c6ea19dba] Running
E1019 13:12:52.672005  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003409467s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-725412 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)
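
DeployApp is a smoke test: create a busybox pod from testdata, wait for it to run, then read the open-files limit inside it. The kubectl equivalent of those steps (the wait line stands in for the helper's polling and is not a command the suite runs):

	kubectl --context old-k8s-version-725412 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-725412 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context old-k8s-version-725412 exec busybox -- /bin/sh -c "ulimit -n"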

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-725412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-725412 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)
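
EnableAddonWhileActive is two commands: enable metrics-server on the live profile with overridden image and registry, then confirm the Deployment landed in kube-system. A sketch with the same overrides used above:

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-725412 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context old-k8s-version-725412 describe deploy/metrics-server -n kube-system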

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (90.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-725412 --alsologtostderr -v=3
E1019 13:13:09.603106  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/functional-789160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-725412 --alsologtostderr -v=3: (1m30.024989632s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (90.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-446116 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9c4e593c-a688-415e-8dd1-5bce4ea07395] Pending
helpers_test.go:352: "busybox" [9c4e593c-a688-415e-8dd1-5bce4ea07395] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1019 13:13:15.902238  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [9c4e593c-a688-415e-8dd1-5bce4ea07395] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003711125s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-446116 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-446116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1019 13:13:21.518160  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-446116 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (82.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-446116 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-446116 --alsologtostderr -v=3: (1m22.393516523s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (82.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-522966 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3b4e67ec-06c0-4949-aa50-e9e8f3acdb4c] Pending
helpers_test.go:352: "busybox" [3b4e67ec-06c0-4949-aa50-e9e8f3acdb4c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3b4e67ec-06c0-4949-aa50-e9e8f3acdb4c] Running
E1019 13:13:34.478523  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:34.484899  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:34.496302  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:34.517769  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:34.559237  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:34.640691  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:34.802348  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:35.123980  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005151021s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-522966 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-522966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1019 13:13:35.765410  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-522966 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (73.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-522966 --alsologtostderr -v=3
E1019 13:13:37.047026  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:39.608473  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-522966 --alsologtostderr -v=3: (1m13.24823529s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (73.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-257575 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7edb4ae3-d02c-45e0-bf40-4a09b4e5689e] Pending
helpers_test.go:352: "busybox" [7edb4ae3-d02c-45e0-bf40-4a09b4e5689e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1019 13:13:44.730215  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [7edb4ae3-d02c-45e0-bf40-4a09b4e5689e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00315717s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-257575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-257575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-257575 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (88.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-257575 --alsologtostderr -v=3
E1019 13:13:54.972421  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.160987  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.167346  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.178689  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.199974  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.241327  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.322749  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.484318  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:58.806071  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:13:59.447497  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:14:00.729540  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:14:03.291234  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:14:08.413000  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:14:15.454223  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:14:18.655123  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-257575 --alsologtostderr -v=3: (1m28.106227642s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725412 -n old-k8s-version-725412
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725412 -n old-k8s-version-725412: exit status 7 (67.460566ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-725412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-725412 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1019 13:14:37.823952  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:14:39.137139  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:14:43.440694  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-725412 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (45.552781868s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725412 -n old-k8s-version-725412
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-446116 -n no-preload-446116
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-446116 -n no-preload-446116: exit status 7 (65.476794ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-446116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (65.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-446116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-446116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m5.185040547s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-446116 -n no-preload-446116
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (65.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-522966 -n embed-certs-522966
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-522966 -n embed-certs-522966: exit status 7 (78.078192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-522966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (59.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-522966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1019 13:14:56.416056  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.110358  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.116763  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.128274  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.149850  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.192062  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.273621  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.435156  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:00.756948  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:01.399122  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:02.175400  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/addons-360741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:02.681140  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:05.243185  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:10.365568  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-522966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (59.06198973s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-522966 -n embed-certs-522966
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dpsjl" [2970cec3-6de7-47ab-822e-ca9a689df8c9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dpsjl" [2970cec3-6de7-47ab-822e-ca9a689df8c9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.004744492s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575
E1019 13:15:20.099012  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575: exit status 7 (83.196294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-257575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-257575 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1019 13:15:20.607116  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-257575 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (49.582310317s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dpsjl" [2970cec3-6de7-47ab-822e-ca9a689df8c9] Running
E1019 13:15:36.114521  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:36.121671  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:36.133810  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:36.155496  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:36.197644  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:36.279796  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:15:36.441411  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004814727s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-725412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-725412 image list --format=json
E1019 13:15:36.763270  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-725412 --alsologtostderr -v=1
E1019 13:15:37.405308  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-725412 --alsologtostderr -v=1: (1.123260987s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725412 -n old-k8s-version-725412
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725412 -n old-k8s-version-725412: exit status 2 (313.537628ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-725412 -n old-k8s-version-725412
E1019 13:15:38.686638  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-725412 -n old-k8s-version-725412: exit status 2 (340.609617ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-725412 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725412 -n old-k8s-version-725412
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-725412 -n old-k8s-version-725412
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-277450 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1019 13:15:46.371003  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-277450 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (46.723331955s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c8jzs" [5144e742-294c-4149-92ad-7771f92f62b1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c8jzs" [5144e742-294c-4149-92ad-7771f92f62b1] Running
E1019 13:15:56.612967  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.008537287s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4xkcc" [4748d3a5-ce7c-4b31-9f04-591fd5af65b2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4xkcc" [4748d3a5-ce7c-4b31-9f04-591fd5af65b2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005346469s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c8jzs" [5144e742-294c-4149-92ad-7771f92f62b1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004099278s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-522966 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-522966 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-522966 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-522966 -n embed-certs-522966
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-522966 -n embed-certs-522966: exit status 2 (280.379269ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-522966 -n embed-certs-522966
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-522966 -n embed-certs-522966: exit status 2 (296.791755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-522966 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-522966 --alsologtostderr -v=1: (1.08464466s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-522966 -n embed-certs-522966
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-522966 -n embed-certs-522966
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4xkcc" [4748d3a5-ce7c-4b31-9f04-591fd5af65b2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.056972222s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-446116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-446116 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-446116 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-446116 --alsologtostderr -v=1: (1.382192783s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-446116 -n no-preload-446116
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-446116 -n no-preload-446116: exit status 2 (359.233295ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-446116 -n no-preload-446116
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-446116 -n no-preload-446116: exit status 2 (304.846424ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-446116 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-446116 -n no-preload-446116
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-446116 -n no-preload-446116
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bctqk" [22ba8525-ac75-4308-96f5-8d284ee994ca] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bctqk" [22ba8525-ac75-4308-96f5-8d284ee994ca] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.003692513s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bctqk" [22ba8525-ac75-4308-96f5-8d284ee994ca] Running
E1019 13:16:18.337858  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/calico-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:22.051518  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/enable-default-cni-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00463145s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-257575 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-257575 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-257575 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575: exit status 2 (275.107492ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575: exit status 2 (263.088719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-257575 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-257575 -n default-k8s-diff-port-257575
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-277450 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)
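Note on the warning above: because this profile was started with --network-plugin=cni and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 but no CNI plugin is applied, regular pods cannot schedule, which is also why the DeployApp step for this group is a no-op. A minimal sketch of the "additional setup" the warning refers to, assuming one wanted to make pods schedulable on this profile by hand (the flannel manifest and the CIDR substitution are illustrative only and are not part of the test harness):

	# Hypothetical: install flannel as the CNI and align it with the 10.42.0.0/16 pod CIDR used above
	curl -fsSL https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml \
	  | sed 's|10.244.0.0/16|10.42.0.0/16|' \
	  | kubectl --context newest-cni-277450 apply -f -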

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-277450 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-277450 --alsologtostderr -v=3: (10.747297079s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.75s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-277450 -n newest-cni-277450
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-277450 -n newest-cni-277450: exit status 7 (64.919076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-277450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-277450 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1019 13:16:42.021495  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/custom-flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:48.813777  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:48.820239  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:48.831650  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:48.853094  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:48.894555  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:48.976058  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:49.137656  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:49.459399  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:50.100804  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:51.382991  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:53.945502  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:53.960960  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/kindnet-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:58.056888  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/flannel-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:59.067514  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:16:59.579296  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/auto-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:17:09.309561  148701 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-277450 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (33.24111669s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-277450 -n newest-cni-277450
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.52s)
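The E-level cert_rotation.go lines interleaved with this start come from client-go's certificate reload path complaining that client.crt files under .minikube/profiles/<name>/ no longer exist; the referenced profiles (bridge-422995, kindnet-422995, flannel-422995, auto-422995, custom-flannel-422995) belong to the network-plugin tests and appear to have been torn down earlier in the run, so these errors are noise rather than a failure of this test. The short Go sketch below reproduces the underlying condition; the profile path is copied from the log, and this is only an illustration, not minikube's or client-go's actual rotation code.

package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path layout copied from the log; this profile was removed earlier in the run,
	// so its certificate files are gone.
	profileDir := "/home/jenkins/minikube-integration/21772-144655/.minikube/profiles/bridge-422995"
	certFile := filepath.Join(profileDir, "client.crt")
	keyFile := filepath.Join(profileDir, "client.key")

	if _, err := os.Stat(certFile); err != nil {
		fmt.Println("client cert missing:", err) // same condition the cert_rotation.go lines report
	}

	// Trying to load the pair anyway surfaces the same "no such file or directory" error.
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		fmt.Println("Loading client cert failed:", err)
	}
}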

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-277450 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
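VerifyKubernetesImages lists the images present in the node and flags anything outside the expected Kubernetes image set; here the leftover kindnet CNI image from the network-plugin runs is reported but does not fail the test. A rough Go sketch of that kind of check follows. The prefix list is illustrative only (the real test derives the expected images from the Kubernetes version and parses the --format=json output), so treat it as a sketch of the idea rather than the test's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Illustrative prefixes for "expected" images; the actual test builds this list
	// from the Kubernetes version under test.
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/"}

	// Plain-text listing; the test itself requests --format=json.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-277450", "image", "list").Output()
	if err != nil {
		panic(err)
	}

	for _, img := range strings.Fields(string(out)) {
		known := false
		for _, prefix := range expected {
			if strings.HasPrefix(img, prefix) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img) // e.g. kindest/kindnetd:v20250512-df8de77b
		}
	}
}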

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-277450 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-277450 --alsologtostderr -v=1: (1.750388504s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-277450 -n newest-cni-277450
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-277450 -n newest-cni-277450: exit status 2 (300.349282ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-277450 -n newest-cni-277450
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-277450 -n newest-cni-277450: exit status 2 (267.479212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-277450 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-277450 -n newest-cni-277450
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-277450 -n newest-cni-277450
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.83s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.3
267 TestNetworkPlugins/group/cilium 3.52
277 TestStartStop/group/disable-driver-mounts 0.19
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-360741 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-422995 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-422995" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-422995

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422995"

                                                
                                                
----------------------- debugLogs end: kubenet-422995 [took: 3.115769949s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-422995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-422995
--- SKIP: TestNetworkPlugins/group/kubenet (3.30s)
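Every kubectl probe in the debugLogs dump above fails with "context was not found" (and every minikube probe with "Profile ... not found") because the kubenet profile is skipped before a cluster is ever created: the "k8s: kubectl config" section shows an empty kubeconfig with clusters: null and no contexts. A small Go sketch of that pre-check, assuming kubectl is on PATH and using kubectl config get-contexts rather than the test's own helpers, is:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the current kubeconfig defines the named context.
func contextExists(name string) bool {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true
		}
	}
	return false
}

func main() {
	// Neither skipped network-plugin profile ever starts a cluster, so no context is written.
	for _, profile := range []string{"kubenet-422995", "cilium-422995"} {
		if !contextExists(profile) {
			fmt.Printf("context %q not found; kubectl probes against it will fail\n", profile)
		}
	}
}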

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-422995 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-422995

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-422995" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-422995" does not exist

>>> k8s: api server logs:
error: context "cilium-422995" does not exist

>>> host: /etc/cni:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: ip a s:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: ip r s:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: iptables-save:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: iptables table nat:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-422995

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-422995

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-422995" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-422995" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-422995

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-422995

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-422995" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-422995" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-422995" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-422995" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-422995" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: kubelet daemon config:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> k8s: kubelet logs:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-422995

>>> host: docker daemon status:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: docker daemon config:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: docker system info:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: cri-docker daemon status:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: cri-docker daemon config:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: cri-dockerd version:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: containerd daemon status:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: containerd daemon config:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: containerd config dump:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: crio daemon status:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: crio daemon config:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: /etc/crio:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

>>> host: crio config:
* Profile "cilium-422995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422995"

----------------------- debugLogs end: cilium-422995 [took: 3.350871707s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-422995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-422995
--- SKIP: TestNetworkPlugins/group/cilium (3.52s)

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-549635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-549635
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
