Test Report: KVM_Linux_crio 21738

0f64f31b8846d8060cae128a3e5be9cc35c08ea3:2025-10-16:41932
Failed tests (3/324)

Order  Failed test                                      Duration (s)
37     TestAddons/parallel/Ingress                      157.85
244    TestPreload                                      168.76
286    TestPause/serial/SecondStartNoReconfiguration    64.96
TestAddons/parallel/Ingress (157.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-019580 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-019580 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-019580 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9d772ea5-6e5c-457a-a18c-fd5017516390] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9d772ea5-6e5c-457a-a18c-fd5017516390] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005880824s
I1016 17:48:33.568219   12767 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-019580 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.568897447s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
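For context: curl exits with code 28 on "operation timed out", and `minikube ssh` propagates the remote exit status, which is what the "Process exited with status 28" stderr above reports. Below is a minimal sketch of the probe the test performs, assuming minikube is on PATH and the profile name from this run; the helper name and retry policy are illustrative, not addons_test.go's actual code.

	// Minimal sketch of the ingress probe above; probeIngress and the retry
	// policy are illustrative, not the test's actual code.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func probeIngress(profile string) error {
		// curl exits 28 on "operation timed out"; `minikube ssh` surfaces the
		// remote exit status, which is what the stderr block above shows.
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			"curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("ingress probe failed: %w: %s", err, out)
		}
		return nil
	}

	func main() {
		// Retry a few times; the ingress controller may still be warming up.
		for i := 0; i < 3; i++ {
			if err := probeIngress("addons-019580"); err == nil {
				fmt.Println("ingress answered")
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("ingress never became reachable")
	}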
addons_test.go:288: (dbg) Run:  kubectl --context addons-019580 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-019580 -n addons-019580
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 logs -n 25: (1.337721176s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-762056                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-762056 │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │ 16 Oct 25 17:44 UTC │
	│ start   │ --download-only -p binary-mirror-778089 --alsologtostderr --binary-mirror http://127.0.0.1:37567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-778089 │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │                     │
	│ delete  │ -p binary-mirror-778089                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-778089 │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │ 16 Oct 25 17:44 UTC │
	│ addons  │ disable dashboard -p addons-019580                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │                     │
	│ addons  │ enable dashboard -p addons-019580                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │                     │
	│ start   │ -p addons-019580 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │ 16 Oct 25 17:47 UTC │
	│ addons  │ addons-019580 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:47 UTC │ 16 Oct 25 17:47 UTC │
	│ addons  │ addons-019580 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ enable headlamp -p addons-019580 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ ip      │ addons-019580 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-019580                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ ssh     │ addons-019580 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │                     │
	│ addons  │ addons-019580 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ ssh     │ addons-019580 ssh cat /opt/local-path-provisioner/pvc-9fdcc576-35c5-4162-a0f5-167380d6b2ab_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	│ addons  │ addons-019580 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:49 UTC │
	│ addons  │ addons-019580 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:49 UTC │ 16 Oct 25 17:49 UTC │
	│ addons  │ addons-019580 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:49 UTC │ 16 Oct 25 17:49 UTC │
	│ ip      │ addons-019580 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-019580        │ jenkins │ v1.37.0 │ 16 Oct 25 17:50 UTC │ 16 Oct 25 17:50 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:44:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
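	The four header lines above document the klog format used by every entry that follows. Here is a small sketch that splits one such line into its fields; the regexp is an assumption derived from that format string, not minikube code.

	// Sketch: parse the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	// header documented above. The pattern is an assumption, not minikube code.
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I1016 17:44:27.845768   13479 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}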
	I1016 17:44:27.845768   13479 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:44:27.845865   13479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:44:27.845872   13479 out.go:374] Setting ErrFile to fd 2...
	I1016 17:44:27.845879   13479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:44:27.846051   13479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 17:44:27.846592   13479 out.go:368] Setting JSON to false
	I1016 17:44:27.847502   13479 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1606,"bootTime":1760635062,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:44:27.847593   13479 start.go:141] virtualization: kvm guest
	I1016 17:44:27.849449   13479 out.go:179] * [addons-019580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:44:27.850690   13479 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:44:27.850685   13479 notify.go:220] Checking for updates...
	I1016 17:44:27.853038   13479 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:44:27.854198   13479 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 17:44:27.855284   13479 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 17:44:27.856388   13479 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:44:27.857403   13479 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:44:27.858791   13479 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:44:27.888476   13479 out.go:179] * Using the kvm2 driver based on user configuration
	I1016 17:44:27.889675   13479 start.go:305] selected driver: kvm2
	I1016 17:44:27.889694   13479 start.go:925] validating driver "kvm2" against <nil>
	I1016 17:44:27.889706   13479 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:44:27.890430   13479 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:44:27.890510   13479 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 17:44:27.904182   13479 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 17:44:27.904212   13479 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 17:44:27.917477   13479 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 17:44:27.917522   13479 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 17:44:27.917791   13479 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 17:44:27.917829   13479 cni.go:84] Creating CNI manager for ""
	I1016 17:44:27.917876   13479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 17:44:27.917885   13479 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1016 17:44:27.917938   13479 start.go:349] cluster config:
	{Name:addons-019580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-019580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:44:27.918060   13479 iso.go:125] acquiring lock: {Name:mke23fa091b5b2529e94c2fba7379020f81892c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:44:27.920510   13479 out.go:179] * Starting "addons-019580" primary control-plane node in "addons-019580" cluster
	I1016 17:44:27.921642   13479 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:27.921683   13479 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 17:44:27.921695   13479 cache.go:58] Caching tarball of preloaded images
	I1016 17:44:27.921808   13479 preload.go:233] Found /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 17:44:27.921822   13479 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 17:44:27.922111   13479 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/config.json ...
	I1016 17:44:27.922152   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/config.json: {Name:mk7dd8ab97881ca706b54ed7444062a90e8e9353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:27.922307   13479 start.go:360] acquireMachinesLock for addons-019580: {Name:mkfc8a48414152b8c16845fb35ed65ca3f42bae5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1016 17:44:27.922371   13479 start.go:364] duration metric: took 46.084µs to acquireMachinesLock for "addons-019580"
	I1016 17:44:27.922396   13479 start.go:93] Provisioning new machine with config: &{Name:addons-019580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-019580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 17:44:27.922462   13479 start.go:125] createHost starting for "" (driver="kvm2")
	I1016 17:44:27.923951   13479 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1016 17:44:27.924063   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:44:27.924101   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:44:27.936749   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I1016 17:44:27.937251   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:44:27.937717   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:44:27.937738   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:44:27.938088   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:44:27.938306   13479 main.go:141] libmachine: (addons-019580) Calling .GetMachineName
	I1016 17:44:27.938431   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:27.938538   13479 start.go:159] libmachine.API.Create for "addons-019580" (driver="kvm2")
	I1016 17:44:27.938571   13479 client.go:168] LocalClient.Create starting
	I1016 17:44:27.938617   13479 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem
	I1016 17:44:28.590883   13479 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem
	I1016 17:44:28.668571   13479 main.go:141] libmachine: Running pre-create checks...
	I1016 17:44:28.668593   13479 main.go:141] libmachine: (addons-019580) Calling .PreCreateCheck
	I1016 17:44:28.669073   13479 main.go:141] libmachine: (addons-019580) Calling .GetConfigRaw
	I1016 17:44:28.669428   13479 main.go:141] libmachine: Creating machine...
	I1016 17:44:28.669442   13479 main.go:141] libmachine: (addons-019580) Calling .Create
	I1016 17:44:28.669591   13479 main.go:141] libmachine: (addons-019580) creating domain...
	I1016 17:44:28.669610   13479 main.go:141] libmachine: (addons-019580) creating network...
	I1016 17:44:28.671037   13479 main.go:141] libmachine: (addons-019580) DBG | found existing default network
	I1016 17:44:28.671212   13479 main.go:141] libmachine: (addons-019580) DBG | <network>
	I1016 17:44:28.671221   13479 main.go:141] libmachine: (addons-019580) DBG |   <name>default</name>
	I1016 17:44:28.671228   13479 main.go:141] libmachine: (addons-019580) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1016 17:44:28.671236   13479 main.go:141] libmachine: (addons-019580) DBG |   <forward mode='nat'>
	I1016 17:44:28.671255   13479 main.go:141] libmachine: (addons-019580) DBG |     <nat>
	I1016 17:44:28.671272   13479 main.go:141] libmachine: (addons-019580) DBG |       <port start='1024' end='65535'/>
	I1016 17:44:28.671282   13479 main.go:141] libmachine: (addons-019580) DBG |     </nat>
	I1016 17:44:28.671290   13479 main.go:141] libmachine: (addons-019580) DBG |   </forward>
	I1016 17:44:28.671297   13479 main.go:141] libmachine: (addons-019580) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1016 17:44:28.671301   13479 main.go:141] libmachine: (addons-019580) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1016 17:44:28.671307   13479 main.go:141] libmachine: (addons-019580) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1016 17:44:28.671312   13479 main.go:141] libmachine: (addons-019580) DBG |     <dhcp>
	I1016 17:44:28.671318   13479 main.go:141] libmachine: (addons-019580) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1016 17:44:28.671322   13479 main.go:141] libmachine: (addons-019580) DBG |     </dhcp>
	I1016 17:44:28.671335   13479 main.go:141] libmachine: (addons-019580) DBG |   </ip>
	I1016 17:44:28.671342   13479 main.go:141] libmachine: (addons-019580) DBG | </network>
	I1016 17:44:28.671350   13479 main.go:141] libmachine: (addons-019580) DBG | 
	I1016 17:44:28.671981   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:28.671819   13507 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1016 17:44:28.671997   13479 main.go:141] libmachine: (addons-019580) DBG | defining private network:
	I1016 17:44:28.672007   13479 main.go:141] libmachine: (addons-019580) DBG | 
	I1016 17:44:28.672015   13479 main.go:141] libmachine: (addons-019580) DBG | <network>
	I1016 17:44:28.672024   13479 main.go:141] libmachine: (addons-019580) DBG |   <name>mk-addons-019580</name>
	I1016 17:44:28.672035   13479 main.go:141] libmachine: (addons-019580) DBG |   <dns enable='no'/>
	I1016 17:44:28.672046   13479 main.go:141] libmachine: (addons-019580) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1016 17:44:28.672058   13479 main.go:141] libmachine: (addons-019580) DBG |     <dhcp>
	I1016 17:44:28.672067   13479 main.go:141] libmachine: (addons-019580) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1016 17:44:28.672073   13479 main.go:141] libmachine: (addons-019580) DBG |     </dhcp>
	I1016 17:44:28.672089   13479 main.go:141] libmachine: (addons-019580) DBG |   </ip>
	I1016 17:44:28.672104   13479 main.go:141] libmachine: (addons-019580) DBG | </network>
	I1016 17:44:28.672114   13479 main.go:141] libmachine: (addons-019580) DBG | 
	I1016 17:44:28.677748   13479 main.go:141] libmachine: (addons-019580) DBG | creating private network mk-addons-019580 192.168.39.0/24...
	I1016 17:44:28.743932   13479 main.go:141] libmachine: (addons-019580) DBG | private network mk-addons-019580 192.168.39.0/24 created
	I1016 17:44:28.744217   13479 main.go:141] libmachine: (addons-019580) DBG | <network>
	I1016 17:44:28.744241   13479 main.go:141] libmachine: (addons-019580) DBG |   <name>mk-addons-019580</name>
	I1016 17:44:28.744254   13479 main.go:141] libmachine: (addons-019580) setting up store path in /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580 ...
	I1016 17:44:28.744276   13479 main.go:141] libmachine: (addons-019580) building disk image from file:///home/jenkins/minikube-integration/21738-8816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1016 17:44:28.744291   13479 main.go:141] libmachine: (addons-019580) DBG |   <uuid>f5fdaf98-bb26-4157-a51f-de2c43d2f1ce</uuid>
	I1016 17:44:28.744302   13479 main.go:141] libmachine: (addons-019580) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1016 17:44:28.744313   13479 main.go:141] libmachine: (addons-019580) DBG |   <mac address='52:54:00:f9:be:0b'/>
	I1016 17:44:28.744322   13479 main.go:141] libmachine: (addons-019580) DBG |   <dns enable='no'/>
	I1016 17:44:28.744328   13479 main.go:141] libmachine: (addons-019580) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1016 17:44:28.744334   13479 main.go:141] libmachine: (addons-019580) DBG |     <dhcp>
	I1016 17:44:28.744361   13479 main.go:141] libmachine: (addons-019580) Downloading /home/jenkins/minikube-integration/21738-8816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21738-8816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1016 17:44:28.744373   13479 main.go:141] libmachine: (addons-019580) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1016 17:44:28.744388   13479 main.go:141] libmachine: (addons-019580) DBG |     </dhcp>
	I1016 17:44:28.744394   13479 main.go:141] libmachine: (addons-019580) DBG |   </ip>
	I1016 17:44:28.744405   13479 main.go:141] libmachine: (addons-019580) DBG | </network>
	I1016 17:44:28.744414   13479 main.go:141] libmachine: (addons-019580) DBG | 
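	The "using free private subnet" message above (network.go:206) reflects scanning candidate private /24 blocks and taking the first one that no local interface already occupies. A sketch under that assumption follows; the candidate list, step size, and subnetTaken helper are illustrative, not the driver's actual code.

	// Sketch of free-subnet selection; candidates and helper are assumptions.
	package main

	import (
		"fmt"
		"net"
	)

	// subnetTaken reports whether any local interface address falls inside cidr.
	func subnetTaken(cidr string) bool {
		_, candidate, err := net.ParseCIDR(cidr)
		if err != nil {
			return true
		}
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative on error
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		// 192.168.39.0/24 is the first block tried in this run's log.
		for third := 39; third < 255; third += 11 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !subnetTaken(cidr) {
				fmt.Println("using free private subnet", cidr)
				return
			}
		}
	}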
	I1016 17:44:28.744443   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:28.744225   13507 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 17:44:29.019014   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:29.018856   13507 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa...
	I1016 17:44:29.790897   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:29.790746   13507 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/addons-019580.rawdisk...
	I1016 17:44:29.790917   13479 main.go:141] libmachine: (addons-019580) DBG | Writing magic tar header
	I1016 17:44:29.790945   13479 main.go:141] libmachine: (addons-019580) DBG | Writing SSH key tar header
	I1016 17:44:29.790953   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:29.790867   13507 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580 ...
	I1016 17:44:29.790991   13479 main.go:141] libmachine: (addons-019580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580
	I1016 17:44:29.791169   13479 main.go:141] libmachine: (addons-019580) setting executable bit set on /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580 (perms=drwx------)
	I1016 17:44:29.791190   13479 main.go:141] libmachine: (addons-019580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21738-8816/.minikube/machines
	I1016 17:44:29.791201   13479 main.go:141] libmachine: (addons-019580) setting executable bit set on /home/jenkins/minikube-integration/21738-8816/.minikube/machines (perms=drwxr-xr-x)
	I1016 17:44:29.791217   13479 main.go:141] libmachine: (addons-019580) setting executable bit set on /home/jenkins/minikube-integration/21738-8816/.minikube (perms=drwxr-xr-x)
	I1016 17:44:29.791242   13479 main.go:141] libmachine: (addons-019580) setting executable bit set on /home/jenkins/minikube-integration/21738-8816 (perms=drwxrwxr-x)
	I1016 17:44:29.791257   13479 main.go:141] libmachine: (addons-019580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 17:44:29.791267   13479 main.go:141] libmachine: (addons-019580) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1016 17:44:29.791279   13479 main.go:141] libmachine: (addons-019580) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1016 17:44:29.791289   13479 main.go:141] libmachine: (addons-019580) defining domain...
	I1016 17:44:29.791306   13479 main.go:141] libmachine: (addons-019580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21738-8816
	I1016 17:44:29.791322   13479 main.go:141] libmachine: (addons-019580) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1016 17:44:29.791345   13479 main.go:141] libmachine: (addons-019580) DBG | checking permissions on dir: /home/jenkins
	I1016 17:44:29.791374   13479 main.go:141] libmachine: (addons-019580) DBG | checking permissions on dir: /home
	I1016 17:44:29.791384   13479 main.go:141] libmachine: (addons-019580) DBG | skipping /home - not owner
	I1016 17:44:29.792338   13479 main.go:141] libmachine: (addons-019580) defining domain using XML: 
	I1016 17:44:29.792360   13479 main.go:141] libmachine: (addons-019580) <domain type='kvm'>
	I1016 17:44:29.792370   13479 main.go:141] libmachine: (addons-019580)   <name>addons-019580</name>
	I1016 17:44:29.792378   13479 main.go:141] libmachine: (addons-019580)   <memory unit='MiB'>4096</memory>
	I1016 17:44:29.792386   13479 main.go:141] libmachine: (addons-019580)   <vcpu>2</vcpu>
	I1016 17:44:29.792395   13479 main.go:141] libmachine: (addons-019580)   <features>
	I1016 17:44:29.792402   13479 main.go:141] libmachine: (addons-019580)     <acpi/>
	I1016 17:44:29.792406   13479 main.go:141] libmachine: (addons-019580)     <apic/>
	I1016 17:44:29.792410   13479 main.go:141] libmachine: (addons-019580)     <pae/>
	I1016 17:44:29.792415   13479 main.go:141] libmachine: (addons-019580)   </features>
	I1016 17:44:29.792419   13479 main.go:141] libmachine: (addons-019580)   <cpu mode='host-passthrough'>
	I1016 17:44:29.792423   13479 main.go:141] libmachine: (addons-019580)   </cpu>
	I1016 17:44:29.792430   13479 main.go:141] libmachine: (addons-019580)   <os>
	I1016 17:44:29.792440   13479 main.go:141] libmachine: (addons-019580)     <type>hvm</type>
	I1016 17:44:29.792474   13479 main.go:141] libmachine: (addons-019580)     <boot dev='cdrom'/>
	I1016 17:44:29.792495   13479 main.go:141] libmachine: (addons-019580)     <boot dev='hd'/>
	I1016 17:44:29.792507   13479 main.go:141] libmachine: (addons-019580)     <bootmenu enable='no'/>
	I1016 17:44:29.792518   13479 main.go:141] libmachine: (addons-019580)   </os>
	I1016 17:44:29.792527   13479 main.go:141] libmachine: (addons-019580)   <devices>
	I1016 17:44:29.792535   13479 main.go:141] libmachine: (addons-019580)     <disk type='file' device='cdrom'>
	I1016 17:44:29.792549   13479 main.go:141] libmachine: (addons-019580)       <source file='/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/boot2docker.iso'/>
	I1016 17:44:29.792560   13479 main.go:141] libmachine: (addons-019580)       <target dev='hdc' bus='scsi'/>
	I1016 17:44:29.792569   13479 main.go:141] libmachine: (addons-019580)       <readonly/>
	I1016 17:44:29.792576   13479 main.go:141] libmachine: (addons-019580)     </disk>
	I1016 17:44:29.792585   13479 main.go:141] libmachine: (addons-019580)     <disk type='file' device='disk'>
	I1016 17:44:29.792600   13479 main.go:141] libmachine: (addons-019580)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1016 17:44:29.792624   13479 main.go:141] libmachine: (addons-019580)       <source file='/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/addons-019580.rawdisk'/>
	I1016 17:44:29.792636   13479 main.go:141] libmachine: (addons-019580)       <target dev='hda' bus='virtio'/>
	I1016 17:44:29.792645   13479 main.go:141] libmachine: (addons-019580)     </disk>
	I1016 17:44:29.792655   13479 main.go:141] libmachine: (addons-019580)     <interface type='network'>
	I1016 17:44:29.792665   13479 main.go:141] libmachine: (addons-019580)       <source network='mk-addons-019580'/>
	I1016 17:44:29.792675   13479 main.go:141] libmachine: (addons-019580)       <model type='virtio'/>
	I1016 17:44:29.792700   13479 main.go:141] libmachine: (addons-019580)     </interface>
	I1016 17:44:29.792723   13479 main.go:141] libmachine: (addons-019580)     <interface type='network'>
	I1016 17:44:29.792735   13479 main.go:141] libmachine: (addons-019580)       <source network='default'/>
	I1016 17:44:29.792749   13479 main.go:141] libmachine: (addons-019580)       <model type='virtio'/>
	I1016 17:44:29.792773   13479 main.go:141] libmachine: (addons-019580)     </interface>
	I1016 17:44:29.792796   13479 main.go:141] libmachine: (addons-019580)     <serial type='pty'>
	I1016 17:44:29.792809   13479 main.go:141] libmachine: (addons-019580)       <target port='0'/>
	I1016 17:44:29.792826   13479 main.go:141] libmachine: (addons-019580)     </serial>
	I1016 17:44:29.792837   13479 main.go:141] libmachine: (addons-019580)     <console type='pty'>
	I1016 17:44:29.792847   13479 main.go:141] libmachine: (addons-019580)       <target type='serial' port='0'/>
	I1016 17:44:29.792854   13479 main.go:141] libmachine: (addons-019580)     </console>
	I1016 17:44:29.792865   13479 main.go:141] libmachine: (addons-019580)     <rng model='virtio'>
	I1016 17:44:29.792877   13479 main.go:141] libmachine: (addons-019580)       <backend model='random'>/dev/random</backend>
	I1016 17:44:29.792889   13479 main.go:141] libmachine: (addons-019580)     </rng>
	I1016 17:44:29.792905   13479 main.go:141] libmachine: (addons-019580)   </devices>
	I1016 17:44:29.792914   13479 main.go:141] libmachine: (addons-019580) </domain>
	I1016 17:44:29.792936   13479 main.go:141] libmachine: (addons-019580) 
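	The XML assembled above is what the driver hands to libvirt. Below is a sketch of defining and starting such a domain with the github.com/libvirt/libvirt-go bindings; it illustrates the API, not the kvm2 driver's actual code, and uses a minimal placeholder XML instead of the full definition above.

	// Sketch: define and start a libvirt domain from XML like the one above.
	package main

	import (
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as the log
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Placeholder XML; the real definition is the <domain> block printed above.
		domainXML := `<domain type='kvm'><name>sketch</name><memory unit='MiB'>512</memory><os><type>hvm</type></os></domain>`

		dom, err := conn.DomainDefineXML(domainXML) // persist the definition
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // start it, as "starting domain..." does
			log.Fatal(err)
		}
	}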
	I1016 17:44:29.799555   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:09:7c:8b in network default
	I1016 17:44:29.800166   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:29.800186   13479 main.go:141] libmachine: (addons-019580) starting domain...
	I1016 17:44:29.800211   13479 main.go:141] libmachine: (addons-019580) ensuring networks are active...
	I1016 17:44:29.801164   13479 main.go:141] libmachine: (addons-019580) Ensuring network default is active
	I1016 17:44:29.801516   13479 main.go:141] libmachine: (addons-019580) Ensuring network mk-addons-019580 is active
	I1016 17:44:29.802184   13479 main.go:141] libmachine: (addons-019580) getting domain XML...
	I1016 17:44:29.803188   13479 main.go:141] libmachine: (addons-019580) DBG | starting domain XML:
	I1016 17:44:29.803211   13479 main.go:141] libmachine: (addons-019580) DBG | <domain type='kvm'>
	I1016 17:44:29.803221   13479 main.go:141] libmachine: (addons-019580) DBG |   <name>addons-019580</name>
	I1016 17:44:29.803229   13479 main.go:141] libmachine: (addons-019580) DBG |   <uuid>718eea9d-eb61-4b5e-8ace-d4ee8245b513</uuid>
	I1016 17:44:29.803240   13479 main.go:141] libmachine: (addons-019580) DBG |   <memory unit='KiB'>4194304</memory>
	I1016 17:44:29.803248   13479 main.go:141] libmachine: (addons-019580) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1016 17:44:29.803254   13479 main.go:141] libmachine: (addons-019580) DBG |   <vcpu placement='static'>2</vcpu>
	I1016 17:44:29.803258   13479 main.go:141] libmachine: (addons-019580) DBG |   <os>
	I1016 17:44:29.803267   13479 main.go:141] libmachine: (addons-019580) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1016 17:44:29.803278   13479 main.go:141] libmachine: (addons-019580) DBG |     <boot dev='cdrom'/>
	I1016 17:44:29.803288   13479 main.go:141] libmachine: (addons-019580) DBG |     <boot dev='hd'/>
	I1016 17:44:29.803300   13479 main.go:141] libmachine: (addons-019580) DBG |     <bootmenu enable='no'/>
	I1016 17:44:29.803310   13479 main.go:141] libmachine: (addons-019580) DBG |   </os>
	I1016 17:44:29.803319   13479 main.go:141] libmachine: (addons-019580) DBG |   <features>
	I1016 17:44:29.803323   13479 main.go:141] libmachine: (addons-019580) DBG |     <acpi/>
	I1016 17:44:29.803330   13479 main.go:141] libmachine: (addons-019580) DBG |     <apic/>
	I1016 17:44:29.803334   13479 main.go:141] libmachine: (addons-019580) DBG |     <pae/>
	I1016 17:44:29.803338   13479 main.go:141] libmachine: (addons-019580) DBG |   </features>
	I1016 17:44:29.803346   13479 main.go:141] libmachine: (addons-019580) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1016 17:44:29.803351   13479 main.go:141] libmachine: (addons-019580) DBG |   <clock offset='utc'/>
	I1016 17:44:29.803362   13479 main.go:141] libmachine: (addons-019580) DBG |   <on_poweroff>destroy</on_poweroff>
	I1016 17:44:29.803387   13479 main.go:141] libmachine: (addons-019580) DBG |   <on_reboot>restart</on_reboot>
	I1016 17:44:29.803430   13479 main.go:141] libmachine: (addons-019580) DBG |   <on_crash>destroy</on_crash>
	I1016 17:44:29.803445   13479 main.go:141] libmachine: (addons-019580) DBG |   <devices>
	I1016 17:44:29.803457   13479 main.go:141] libmachine: (addons-019580) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1016 17:44:29.803469   13479 main.go:141] libmachine: (addons-019580) DBG |     <disk type='file' device='cdrom'>
	I1016 17:44:29.803478   13479 main.go:141] libmachine: (addons-019580) DBG |       <driver name='qemu' type='raw'/>
	I1016 17:44:29.803495   13479 main.go:141] libmachine: (addons-019580) DBG |       <source file='/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/boot2docker.iso'/>
	I1016 17:44:29.803510   13479 main.go:141] libmachine: (addons-019580) DBG |       <target dev='hdc' bus='scsi'/>
	I1016 17:44:29.803523   13479 main.go:141] libmachine: (addons-019580) DBG |       <readonly/>
	I1016 17:44:29.803535   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1016 17:44:29.803546   13479 main.go:141] libmachine: (addons-019580) DBG |     </disk>
	I1016 17:44:29.803556   13479 main.go:141] libmachine: (addons-019580) DBG |     <disk type='file' device='disk'>
	I1016 17:44:29.803569   13479 main.go:141] libmachine: (addons-019580) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1016 17:44:29.803591   13479 main.go:141] libmachine: (addons-019580) DBG |       <source file='/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/addons-019580.rawdisk'/>
	I1016 17:44:29.803616   13479 main.go:141] libmachine: (addons-019580) DBG |       <target dev='hda' bus='virtio'/>
	I1016 17:44:29.803626   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1016 17:44:29.803637   13479 main.go:141] libmachine: (addons-019580) DBG |     </disk>
	I1016 17:44:29.803648   13479 main.go:141] libmachine: (addons-019580) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1016 17:44:29.803669   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1016 17:44:29.803683   13479 main.go:141] libmachine: (addons-019580) DBG |     </controller>
	I1016 17:44:29.803694   13479 main.go:141] libmachine: (addons-019580) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1016 17:44:29.803702   13479 main.go:141] libmachine: (addons-019580) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1016 17:44:29.803708   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1016 17:44:29.803715   13479 main.go:141] libmachine: (addons-019580) DBG |     </controller>
	I1016 17:44:29.803720   13479 main.go:141] libmachine: (addons-019580) DBG |     <interface type='network'>
	I1016 17:44:29.803727   13479 main.go:141] libmachine: (addons-019580) DBG |       <mac address='52:54:00:d1:ad:4e'/>
	I1016 17:44:29.803733   13479 main.go:141] libmachine: (addons-019580) DBG |       <source network='mk-addons-019580'/>
	I1016 17:44:29.803737   13479 main.go:141] libmachine: (addons-019580) DBG |       <model type='virtio'/>
	I1016 17:44:29.803743   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1016 17:44:29.803747   13479 main.go:141] libmachine: (addons-019580) DBG |     </interface>
	I1016 17:44:29.803752   13479 main.go:141] libmachine: (addons-019580) DBG |     <interface type='network'>
	I1016 17:44:29.803756   13479 main.go:141] libmachine: (addons-019580) DBG |       <mac address='52:54:00:09:7c:8b'/>
	I1016 17:44:29.803778   13479 main.go:141] libmachine: (addons-019580) DBG |       <source network='default'/>
	I1016 17:44:29.803802   13479 main.go:141] libmachine: (addons-019580) DBG |       <model type='virtio'/>
	I1016 17:44:29.803818   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1016 17:44:29.803828   13479 main.go:141] libmachine: (addons-019580) DBG |     </interface>
	I1016 17:44:29.803836   13479 main.go:141] libmachine: (addons-019580) DBG |     <serial type='pty'>
	I1016 17:44:29.803931   13479 main.go:141] libmachine: (addons-019580) DBG |       <target type='isa-serial' port='0'>
	I1016 17:44:29.803988   13479 main.go:141] libmachine: (addons-019580) DBG |         <model name='isa-serial'/>
	I1016 17:44:29.804011   13479 main.go:141] libmachine: (addons-019580) DBG |       </target>
	I1016 17:44:29.804021   13479 main.go:141] libmachine: (addons-019580) DBG |     </serial>
	I1016 17:44:29.804033   13479 main.go:141] libmachine: (addons-019580) DBG |     <console type='pty'>
	I1016 17:44:29.804044   13479 main.go:141] libmachine: (addons-019580) DBG |       <target type='serial' port='0'/>
	I1016 17:44:29.804052   13479 main.go:141] libmachine: (addons-019580) DBG |     </console>
	I1016 17:44:29.804068   13479 main.go:141] libmachine: (addons-019580) DBG |     <input type='mouse' bus='ps2'/>
	I1016 17:44:29.804079   13479 main.go:141] libmachine: (addons-019580) DBG |     <input type='keyboard' bus='ps2'/>
	I1016 17:44:29.804089   13479 main.go:141] libmachine: (addons-019580) DBG |     <audio id='1' type='none'/>
	I1016 17:44:29.804109   13479 main.go:141] libmachine: (addons-019580) DBG |     <memballoon model='virtio'>
	I1016 17:44:29.804145   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1016 17:44:29.804158   13479 main.go:141] libmachine: (addons-019580) DBG |     </memballoon>
	I1016 17:44:29.804168   13479 main.go:141] libmachine: (addons-019580) DBG |     <rng model='virtio'>
	I1016 17:44:29.804181   13479 main.go:141] libmachine: (addons-019580) DBG |       <backend model='random'>/dev/random</backend>
	I1016 17:44:29.804194   13479 main.go:141] libmachine: (addons-019580) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1016 17:44:29.804204   13479 main.go:141] libmachine: (addons-019580) DBG |     </rng>
	I1016 17:44:29.804214   13479 main.go:141] libmachine: (addons-019580) DBG |   </devices>
	I1016 17:44:29.804223   13479 main.go:141] libmachine: (addons-019580) DBG | </domain>
	I1016 17:44:29.804237   13479 main.go:141] libmachine: (addons-019580) DBG | 
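
The <domain> XML above is what the kvm2 driver hands to libvirt: two virtio NICs (one on the private mk-addons-019580 network, one on the default NAT network), a pty serial console, a virtio memballoon, and a virtio RNG backed by /dev/random. As a rough illustration of the define-and-boot step that follows, here is a minimal Go sketch using the libvirt.org/go/libvirt bindings; this is not minikube's actual driver code, just the same two calls in isolation:

    package main

    import (
        "fmt"
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // domain.xml would hold a <domain> definition like the one logged above.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            log.Fatal(err)
        }

        // DomainDefineXML registers a persistent domain; Create boots it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        name, _ := dom.GetName()
        fmt.Println("domain started:", name)
    }

Create returning without error corresponds to the "domain is now running" line below; the driver then still has to wait for the guest to pick up an address.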
	I1016 17:44:31.094921   13479 main.go:141] libmachine: (addons-019580) waiting for domain to start...
	I1016 17:44:31.096169   13479 main.go:141] libmachine: (addons-019580) domain is now running
	I1016 17:44:31.096191   13479 main.go:141] libmachine: (addons-019580) waiting for IP...
	I1016 17:44:31.096948   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:31.097429   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:31.097456   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:31.097701   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:31.097766   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:31.097710   13507 retry.go:31] will retry after 308.308005ms: waiting for domain to come up
	I1016 17:44:31.407366   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:31.407909   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:31.407936   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:31.408192   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:31.408255   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:31.408190   13507 retry.go:31] will retry after 342.720913ms: waiting for domain to come up
	I1016 17:44:31.752962   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:31.753512   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:31.753536   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:31.753840   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:31.753879   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:31.753809   13507 retry.go:31] will retry after 319.500281ms: waiting for domain to come up
	I1016 17:44:32.075722   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:32.076309   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:32.076339   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:32.076638   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:32.076660   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:32.076633   13507 retry.go:31] will retry after 444.214478ms: waiting for domain to come up
	I1016 17:44:32.522417   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:32.522933   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:32.522955   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:32.523212   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:32.523240   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:32.523196   13507 retry.go:31] will retry after 752.042748ms: waiting for domain to come up
	I1016 17:44:33.277087   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:33.277604   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:33.277630   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:33.277860   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:33.277907   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:33.277853   13507 retry.go:31] will retry after 614.358347ms: waiting for domain to come up
	I1016 17:44:33.893656   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:33.894212   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:33.894238   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:33.894502   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:33.894565   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:33.894509   13507 retry.go:31] will retry after 1.115483412s: waiting for domain to come up
	I1016 17:44:35.011797   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:35.012333   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:35.012359   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:35.012625   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:35.012652   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:35.012591   13507 retry.go:31] will retry after 1.160072306s: waiting for domain to come up
	I1016 17:44:36.175083   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:36.175608   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:36.175634   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:36.175884   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:36.175967   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:36.175888   13507 retry.go:31] will retry after 1.306340779s: waiting for domain to come up
	I1016 17:44:37.484402   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:37.484889   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:37.484904   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:37.485170   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:37.485204   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:37.485146   13507 retry.go:31] will retry after 1.769495611s: waiting for domain to come up
	I1016 17:44:39.256686   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:39.257252   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:39.257285   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:39.257576   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:39.257608   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:39.257534   13507 retry.go:31] will retry after 1.795538224s: waiting for domain to come up
	I1016 17:44:41.055179   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:41.055593   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:41.055625   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:41.055915   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:41.055931   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:41.055886   13507 retry.go:31] will retry after 3.039033445s: waiting for domain to come up
	I1016 17:44:44.096770   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:44.097198   13479 main.go:141] libmachine: (addons-019580) DBG | no network interface addresses found for domain addons-019580 (source=lease)
	I1016 17:44:44.097230   13479 main.go:141] libmachine: (addons-019580) DBG | trying to list again with source=arp
	I1016 17:44:44.097482   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find current IP address of domain addons-019580 in network mk-addons-019580 (interfaces detected: [])
	I1016 17:44:44.097541   13479 main.go:141] libmachine: (addons-019580) DBG | I1016 17:44:44.097477   13507 retry.go:31] will retry after 3.027907999s: waiting for domain to come up
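
The attempts above poll the domain's DHCP leases (falling back to ARP) with delays that grow from roughly 0.3s to 3s before the address finally appears. A minimal sketch of that poll-with-growing-jittered-delay pattern, with a hypothetical lookupIP helper standing in for the lease/ARP queries (this is not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for the lease/ARP queries in the log above.
    func lookupIP() (string, error) {
        return "", errors.New("no interface addresses found")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Jitter and grow the delay, roughly matching the 0.3s..3s spread above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 2*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
        if ip, err := waitForIP(2 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }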
	I1016 17:44:47.128631   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.129150   13479 main.go:141] libmachine: (addons-019580) found domain IP: 192.168.39.210
	I1016 17:44:47.129182   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has current primary IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.129188   13479 main.go:141] libmachine: (addons-019580) reserving static IP address...
	I1016 17:44:47.129580   13479 main.go:141] libmachine: (addons-019580) DBG | unable to find host DHCP lease matching {name: "addons-019580", mac: "52:54:00:d1:ad:4e", ip: "192.168.39.210"} in network mk-addons-019580
	I1016 17:44:47.332938   13479 main.go:141] libmachine: (addons-019580) DBG | Getting to WaitForSSH function...
	I1016 17:44:47.332977   13479 main.go:141] libmachine: (addons-019580) reserved static IP address 192.168.39.210 for domain addons-019580
	I1016 17:44:47.332995   13479 main.go:141] libmachine: (addons-019580) waiting for SSH...
	I1016 17:44:47.335799   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.336270   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:47.336306   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.336543   13479 main.go:141] libmachine: (addons-019580) DBG | Using SSH client type: external
	I1016 17:44:47.336569   13479 main.go:141] libmachine: (addons-019580) DBG | Using SSH private key: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa (-rw-------)
	I1016 17:44:47.336629   13479 main.go:141] libmachine: (addons-019580) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1016 17:44:47.336653   13479 main.go:141] libmachine: (addons-019580) DBG | About to run SSH command:
	I1016 17:44:47.336666   13479 main.go:141] libmachine: (addons-019580) DBG | exit 0
	I1016 17:44:47.470939   13479 main.go:141] libmachine: (addons-019580) DBG | SSH cmd err, output: <nil>: 
	I1016 17:44:47.471190   13479 main.go:141] libmachine: (addons-019580) domain creation complete
	I1016 17:44:47.471527   13479 main.go:141] libmachine: (addons-019580) Calling .GetConfigRaw
	I1016 17:44:47.472092   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:47.472307   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:47.472496   13479 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1016 17:44:47.472507   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:44:47.474030   13479 main.go:141] libmachine: Detecting operating system of created instance...
	I1016 17:44:47.474056   13479 main.go:141] libmachine: Waiting for SSH to be available...
	I1016 17:44:47.474071   13479 main.go:141] libmachine: Getting to WaitForSSH function...
	I1016 17:44:47.474079   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:47.476708   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.477085   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:47.477111   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.477280   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:47.477447   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.477614   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.477711   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:47.477894   13479 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:47.478103   13479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I1016 17:44:47.478113   13479 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1016 17:44:47.578642   13479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
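
Both the external ssh invocation above and the native client here probe the guest the same way: open a session and run `exit 0`; a zero exit status proves sshd is up and key auth works. A minimal sketch of that probe with golang.org/x/crypto/ssh (address, user, and key path taken from the log; otherwise not minikube's code):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Matches the StrictHostKeyChecking=no option used above.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.168.39.210:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // The probe is just "exit 0": any error means SSH is not ready yet.
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }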
	I1016 17:44:47.578672   13479 main.go:141] libmachine: Detecting the provisioner...
	I1016 17:44:47.578681   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:47.581803   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.582241   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:47.582277   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.582474   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:47.582743   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.582934   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.583093   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:47.583260   13479 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:47.583465   13479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I1016 17:44:47.583476   13479 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1016 17:44:47.687296   13479 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1016 17:44:47.687373   13479 main.go:141] libmachine: found compatible host: buildroot
	I1016 17:44:47.687388   13479 main.go:141] libmachine: Provisioning with buildroot...
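
Provisioner detection is just `cat /etc/os-release` plus key=value parsing; ID=buildroot is what selects the buildroot provisioner here. A small parsing sketch, assuming only the stock /etc/os-release format and no minikube internals:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        kv := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // Lines look like ID=buildroot or PRETTY_NAME="Buildroot 2025.02".
            k, v, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue
            }
            kv[k] = strings.Trim(v, `"`)
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("provisioner:", kv["ID"], kv["VERSION_ID"])
    }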
	I1016 17:44:47.687399   13479 main.go:141] libmachine: (addons-019580) Calling .GetMachineName
	I1016 17:44:47.687685   13479 buildroot.go:166] provisioning hostname "addons-019580"
	I1016 17:44:47.687710   13479 main.go:141] libmachine: (addons-019580) Calling .GetMachineName
	I1016 17:44:47.687912   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:47.691028   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.691420   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:47.691459   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.691620   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:47.691838   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.692055   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.692188   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:47.692353   13479 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:47.692622   13479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I1016 17:44:47.692639   13479 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-019580 && echo "addons-019580" | sudo tee /etc/hostname
	I1016 17:44:47.812409   13479 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-019580
	
	I1016 17:44:47.812478   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:47.815640   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.816019   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:47.816049   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.816317   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:47.816510   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.816680   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:47.816834   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:47.817003   13479 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:47.817218   13479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I1016 17:44:47.817234   13479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-019580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-019580/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-019580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 17:44:47.930021   13479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 17:44:47.930050   13479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8816/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8816/.minikube}
	I1016 17:44:47.930101   13479 buildroot.go:174] setting up certificates
	I1016 17:44:47.930133   13479 provision.go:84] configureAuth start
	I1016 17:44:47.930150   13479 main.go:141] libmachine: (addons-019580) Calling .GetMachineName
	I1016 17:44:47.930434   13479 main.go:141] libmachine: (addons-019580) Calling .GetIP
	I1016 17:44:47.933425   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.933850   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:47.933899   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.934111   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:47.936618   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.937041   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:47.937069   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:47.937262   13479 provision.go:143] copyHostCerts
	I1016 17:44:47.937330   13479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/key.pem (1675 bytes)
	I1016 17:44:47.937455   13479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/ca.pem (1078 bytes)
	I1016 17:44:47.937533   13479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/cert.pem (1123 bytes)
	I1016 17:44:47.937597   13479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem org=jenkins.addons-019580 san=[127.0.0.1 192.168.39.210 addons-019580 localhost minikube]
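
The server certificate is signed by the local CA and carries the SANs listed above: loopback, the guest IP, the machine name, localhost, and minikube. For illustration only, here is a self-signed variant with the same SAN set built with crypto/x509; the real flow signs with ca-key.pem instead of self-signing:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-019580"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs mirror the san=[...] list in the log line above.
            DNSNames:    []string{"addons-019580", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.210")},
        }
        // Self-signed for brevity: template doubles as the parent certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }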
	I1016 17:44:48.032676   13479 provision.go:177] copyRemoteCerts
	I1016 17:44:48.032732   13479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 17:44:48.032762   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:48.035683   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.036017   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.036055   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.036233   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:48.036441   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.036569   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:48.036701   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:44:48.119055   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 17:44:48.163311   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 17:44:48.197896   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 17:44:48.232919   13479 provision.go:87] duration metric: took 302.768834ms to configureAuth
	I1016 17:44:48.232953   13479 buildroot.go:189] setting minikube options for container-runtime
	I1016 17:44:48.233171   13479 config.go:182] Loaded profile config "addons-019580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:44:48.233262   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:48.236467   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.236901   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.237017   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.237299   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:48.237488   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.237620   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.237756   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:48.237899   13479 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:48.238129   13479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I1016 17:44:48.238149   13479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 17:44:48.465710   13479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 17:44:48.465740   13479 main.go:141] libmachine: Checking connection to Docker...
	I1016 17:44:48.465748   13479 main.go:141] libmachine: (addons-019580) Calling .GetURL
	I1016 17:44:48.467398   13479 main.go:141] libmachine: (addons-019580) DBG | using libvirt version 8000000
	I1016 17:44:48.470222   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.470535   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.470570   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.470901   13479 main.go:141] libmachine: Docker is up and running!
	I1016 17:44:48.470916   13479 main.go:141] libmachine: Reticulating splines...
	I1016 17:44:48.470923   13479 client.go:171] duration metric: took 20.532341283s to LocalClient.Create
	I1016 17:44:48.470941   13479 start.go:167] duration metric: took 20.532404756s to libmachine.API.Create "addons-019580"
	I1016 17:44:48.470948   13479 start.go:293] postStartSetup for "addons-019580" (driver="kvm2")
	I1016 17:44:48.470959   13479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 17:44:48.470973   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:48.471219   13479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 17:44:48.471242   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:48.473774   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.474203   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.474229   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.474349   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:48.474511   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.474711   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:48.474847   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:44:48.556428   13479 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 17:44:48.560983   13479 info.go:137] Remote host: Buildroot 2025.02
	I1016 17:44:48.561012   13479 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8816/.minikube/addons for local assets ...
	I1016 17:44:48.561104   13479 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8816/.minikube/files for local assets ...
	I1016 17:44:48.561161   13479 start.go:296] duration metric: took 90.203581ms for postStartSetup
	I1016 17:44:48.561212   13479 main.go:141] libmachine: (addons-019580) Calling .GetConfigRaw
	I1016 17:44:48.561788   13479 main.go:141] libmachine: (addons-019580) Calling .GetIP
	I1016 17:44:48.564508   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.564923   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.564947   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.565239   13479 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/config.json ...
	I1016 17:44:48.565445   13479 start.go:128] duration metric: took 20.642974162s to createHost
	I1016 17:44:48.565469   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:48.567841   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.568184   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.568209   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.568456   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:48.568645   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.568785   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.568948   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:48.569095   13479 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:48.569374   13479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I1016 17:44:48.569392   13479 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1016 17:44:48.673785   13479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760636688.634205700
	
	I1016 17:44:48.673815   13479 fix.go:216] guest clock: 1760636688.634205700
	I1016 17:44:48.673825   13479 fix.go:229] Guest: 2025-10-16 17:44:48.6342057 +0000 UTC Remote: 2025-10-16 17:44:48.565456958 +0000 UTC m=+20.760349525 (delta=68.748742ms)
	I1016 17:44:48.673871   13479 fix.go:200] guest clock delta is within tolerance: 68.748742ms
	I1016 17:44:48.673879   13479 start.go:83] releasing machines lock for "addons-019580", held for 20.751494721s
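
The clock check parses `date +%s.%N` from the guest, subtracts the host-side reference timestamp, and accepts the result when the absolute delta is under a tolerance (68.7ms passes here). A toy version of that comparison; the 2s tolerance is an assumed value for illustration, not a quoted minikube constant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest time parsed from `date +%s.%N`, as in the log above.
        guest := time.Unix(1760636688, 634205700)
        remote := time.Now() // host-side reference timestamp

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }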
	I1016 17:44:48.673915   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:48.674258   13479 main.go:141] libmachine: (addons-019580) Calling .GetIP
	I1016 17:44:48.677288   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.677796   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.677842   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.677997   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:48.678538   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:48.678741   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:44:48.678854   13479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 17:44:48.678907   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:48.678955   13479 ssh_runner.go:195] Run: cat /version.json
	I1016 17:44:48.678979   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:44:48.681973   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.682373   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.682409   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.682430   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.682610   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:48.682815   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.682860   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:48.682875   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:48.682991   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:48.683195   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:44:48.683184   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:44:48.683372   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:44:48.683498   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:44:48.683765   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:44:48.766615   13479 ssh_runner.go:195] Run: systemctl --version
	I1016 17:44:48.809396   13479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 17:44:48.975814   13479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 17:44:48.986620   13479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 17:44:48.986688   13479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 17:44:49.019459   13479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1016 17:44:49.019497   13479 start.go:495] detecting cgroup driver to use...
	I1016 17:44:49.019579   13479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 17:44:49.044942   13479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 17:44:49.062349   13479 docker.go:218] disabling cri-docker service (if available) ...
	I1016 17:44:49.062435   13479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 17:44:49.081102   13479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 17:44:49.098256   13479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 17:44:49.245349   13479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 17:44:49.460581   13479 docker.go:234] disabling docker service ...
	I1016 17:44:49.460647   13479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 17:44:49.476842   13479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 17:44:49.491542   13479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 17:44:49.641983   13479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 17:44:49.784914   13479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 17:44:49.800564   13479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 17:44:49.822400   13479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 17:44:49.822469   13479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:49.834215   13479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 17:44:49.834284   13479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:49.846756   13479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:49.858563   13479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:49.870894   13479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 17:44:49.885690   13479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:49.897403   13479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:49.916836   13479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:49.928837   13479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 17:44:49.939572   13479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1016 17:44:49.939648   13479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1016 17:44:49.959157   13479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 17:44:49.971370   13479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:50.108113   13479 ssh_runner.go:195] Run: sudo systemctl restart crio
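
Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the following settings before crio is restarted (an approximate reconstruction for readability, not a capture from the VM):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Lowering ip_unprivileged_port_start to 0 lets containers such as the ingress controller bind ports 80 and 443 without extra privileges.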
	I1016 17:44:50.218472   13479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 17:44:50.218569   13479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 17:44:50.224866   13479 start.go:563] Will wait 60s for crictl version
	I1016 17:44:50.224957   13479 ssh_runner.go:195] Run: which crictl
	I1016 17:44:50.229170   13479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1016 17:44:50.269705   13479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1016 17:44:50.269823   13479 ssh_runner.go:195] Run: crio --version
	I1016 17:44:50.299396   13479 ssh_runner.go:195] Run: crio --version
	I1016 17:44:50.330787   13479 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1016 17:44:50.332129   13479 main.go:141] libmachine: (addons-019580) Calling .GetIP
	I1016 17:44:50.334651   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:50.335011   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:44:50.335031   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:44:50.335289   13479 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1016 17:44:50.339652   13479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 17:44:50.353943   13479 kubeadm.go:883] updating cluster {Name:addons-019580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-019580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 17:44:50.354098   13479 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:50.354161   13479 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 17:44:50.388399   13479 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1016 17:44:50.388481   13479 ssh_runner.go:195] Run: which lz4
	I1016 17:44:50.392786   13479 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1016 17:44:50.397306   13479 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1016 17:44:50.397342   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1016 17:44:51.836466   13479 crio.go:462] duration metric: took 1.443721322s to copy over tarball
	I1016 17:44:51.836547   13479 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1016 17:44:53.468289   13479 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.631712904s)
	I1016 17:44:53.468323   13479 crio.go:469] duration metric: took 1.63182152s to extract the tarball
	I1016 17:44:53.468336   13479 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1016 17:44:53.509314   13479 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 17:44:53.552921   13479 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 17:44:53.552943   13479 cache_images.go:85] Images are preloaded, skipping loading
	I1016 17:44:53.552950   13479 kubeadm.go:934] updating node { 192.168.39.210 8443 v1.34.1 crio true true} ...
	I1016 17:44:53.553035   13479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-019580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-019580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 17:44:53.553096   13479 ssh_runner.go:195] Run: crio config
	I1016 17:44:53.598800   13479 cni.go:84] Creating CNI manager for ""
	I1016 17:44:53.598831   13479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 17:44:53.598849   13479 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 17:44:53.598877   13479 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-019580 NodeName:addons-019580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 17:44:53.599019   13479 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-019580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.210"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 17:44:53.599090   13479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 17:44:53.611179   13479 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 17:44:53.611254   13479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 17:44:53.622986   13479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1016 17:44:53.643283   13479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 17:44:53.663580   13479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
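The three scp's above show that minikube ships its own kubelet systemd drop-in, unit file, and staged kubeadm config rather than relying on whatever the guest image packages. The merged unit systemd actually runs can be inspected in the VM with a standard systemctl call:

	out/minikube-linux-amd64 -p addons-019580 ssh "systemctl cat kubelet"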
	I1016 17:44:53.683778   13479 ssh_runner.go:195] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I1016 17:44:53.687734   13479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
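The /etc/hosts update is a two-step affair: the grep on the preceding line checks whether control-plane.minikube.internal already resolves to 192.168.39.210, and the bash one-liner then strips any stale entry, appends the current mapping, and installs the result via a temp file and sudo cp. The temp-file-plus-cp dance is needed because output redirection runs in the unprivileged caller's shell, so a plain "sudo echo ... > /etc/hosts" would fail; copying the finished file as root sidesteps that. The result is easy to verify:

	out/minikube-linux-amd64 -p addons-019580 ssh "grep control-plane.minikube.internal /etc/hosts"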
	I1016 17:44:53.702412   13479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:53.842067   13479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 17:44:53.878144   13479 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580 for IP: 192.168.39.210
	I1016 17:44:53.878173   13479 certs.go:195] generating shared ca certs ...
	I1016 17:44:53.878198   13479 certs.go:227] acquiring lock for ca certs: {Name:mkad193a0fb33fec0ea18d9a56f494b9b8779adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:53.878374   13479 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key
	I1016 17:44:54.197408   13479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt ...
	I1016 17:44:54.197438   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt: {Name:mkfb3ceddd0dbbe98db5605123005b9152d936ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.197665   13479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key ...
	I1016 17:44:54.197679   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key: {Name:mk0dcc69719ffc60cb55d818cb2199ee66a79bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.197757   13479 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key
	I1016 17:44:54.330981   13479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.crt ...
	I1016 17:44:54.331010   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.crt: {Name:mk62f59d67105553df0d1176d1e9f68bd3c0fa88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.331170   13479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key ...
	I1016 17:44:54.331181   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key: {Name:mkebdfa19338268f07f6c5acd982532505ec689c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.331249   13479 certs.go:257] generating profile certs ...
	I1016 17:44:54.331298   13479 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.key
	I1016 17:44:54.331319   13479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt with IP's: []
	I1016 17:44:54.584460   13479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt ...
	I1016 17:44:54.584488   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: {Name:mk3ea3576c01ab52443da8229c0fd7b595694e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.584649   13479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.key ...
	I1016 17:44:54.584660   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.key: {Name:mkbefe1b71830e8b433d0ea6614f9e084dbd116c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.584727   13479 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.key.8e9495d5
	I1016 17:44:54.584745   13479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.crt.8e9495d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210]
	I1016 17:44:54.898745   13479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.crt.8e9495d5 ...
	I1016 17:44:54.898771   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.crt.8e9495d5: {Name:mkb38405f9c704efafd097935f724ca323d0d288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.898925   13479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.key.8e9495d5 ...
	I1016 17:44:54.898938   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.key.8e9495d5: {Name:mkd7a9482625e46a26e7f01b285c6c3ef40a1a8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:54.899003   13479 certs.go:382] copying /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.crt.8e9495d5 -> /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.crt
	I1016 17:44:54.899091   13479 certs.go:386] copying /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.key.8e9495d5 -> /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.key
	I1016 17:44:54.899163   13479 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.key
	I1016 17:44:54.899181   13479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.crt with IP's: []
	I1016 17:44:55.011057   13479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.crt ...
	I1016 17:44:55.011084   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.crt: {Name:mk7ef195cc46b8060d68b112f22a8cd8cf1d1a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:55.011260   13479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.key ...
	I1016 17:44:55.011270   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.key: {Name:mkf55f32d9b4a22b6fa8c610296b56e1a4c5c72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:55.011455   13479 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 17:44:55.011490   13479 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem (1078 bytes)
	I1016 17:44:55.011511   13479 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem (1123 bytes)
	I1016 17:44:55.011532   13479 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem (1675 bytes)
	I1016 17:44:55.012044   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 17:44:55.045458   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 17:44:55.079310   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 17:44:55.112209   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 17:44:55.143556   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 17:44:55.172519   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 17:44:55.204050   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 17:44:55.234832   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
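Two separate CAs are in play here: minikubeCA (the cluster CA, which signs the kubectl client cert and the apiserver serving cert) and proxyClientCA (the front-proxy CA, which signs the aggregator's proxy-client cert). All are generated on the host under .minikube and only then copied into the VM's /var/lib/minikube/certs, matching the certificatesDir in the kubeadm config above. The chains can be checked on the host with plain openssl, e.g.:

	openssl verify \
	  -CAfile /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt \
	  /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/apiserver.crt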
	I1016 17:44:55.264241   13479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 17:44:55.294273   13479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 17:44:55.313921   13479 ssh_runner.go:195] Run: openssl version
	I1016 17:44:55.320252   13479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 17:44:55.332729   13479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:55.337751   13479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:55.337804   13479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:55.344872   13479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
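The last few commands install minikubeCA into the VM's system trust store. OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so "openssl x509 -hash -noout" is run to compute that hash (b5213941 here) and a b5213941.0 symlink is created pointing at the PEM; anything in the VM that uses the system trust store will then accept certificates signed by the cluster CA. Both halves can be checked at once:

	out/minikube-linux-amd64 -p addons-019580 ssh \
	  "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem; ls -l /etc/ssl/certs/b5213941.0"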
	I1016 17:44:55.357359   13479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 17:44:55.362113   13479 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 17:44:55.362169   13479 kubeadm.go:400] StartCluster: {Name:addons-019580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-019580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:44:55.362243   13479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:44:55.362292   13479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:44:55.400529   13479 cri.go:89] found id: ""
	I1016 17:44:55.400591   13479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 17:44:55.412306   13479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 17:44:55.423591   13479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 17:44:55.435008   13479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 17:44:55.435025   13479 kubeadm.go:157] found existing configuration files:
	
	I1016 17:44:55.435065   13479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 17:44:55.445675   13479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 17:44:55.445733   13479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 17:44:55.463285   13479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 17:44:55.474236   13479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 17:44:55.474315   13479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 17:44:55.487538   13479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 17:44:55.503318   13479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 17:44:55.503370   13479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 17:44:55.517559   13479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 17:44:55.534942   13479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 17:44:55.535023   13479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
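The four grep-then-rm pairs above are one stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 gets removed so kubeadm regenerates it. Roughly, in shell form (a sketch of what the Go code does, not a command taken from the log):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done

On a first start like this one, none of the files exist, so every grep exits non-zero and every rm is a no-op.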
	I1016 17:44:55.546961   13479 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1016 17:44:55.596937   13479 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 17:44:55.597004   13479 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 17:44:55.694586   13479 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 17:44:55.694711   13479 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 17:44:55.694792   13479 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 17:44:55.706079   13479 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 17:44:55.842626   13479 out.go:252]   - Generating certificates and keys ...
	I1016 17:44:55.842727   13479 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 17:44:55.842784   13479 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 17:44:55.842883   13479 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 17:44:56.371031   13479 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 17:44:56.557858   13479 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 17:44:56.783588   13479 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 17:44:57.023589   13479 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 17:44:57.023828   13479 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-019580 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I1016 17:44:57.436176   13479 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 17:44:57.436438   13479 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-019580 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I1016 17:44:57.729350   13479 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 17:44:57.942535   13479 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 17:44:58.353944   13479 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 17:44:58.354016   13479 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 17:44:58.545356   13479 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 17:44:58.712256   13479 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 17:44:58.966747   13479 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 17:44:59.378450   13479 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 17:45:00.252389   13479 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 17:45:00.252666   13479 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 17:45:00.258463   13479 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 17:45:00.280503   13479 out.go:252]   - Booting up control plane ...
	I1016 17:45:00.280682   13479 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 17:45:00.280800   13479 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 17:45:00.280886   13479 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 17:45:00.291965   13479 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 17:45:00.292086   13479 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 17:45:00.298651   13479 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 17:45:00.298912   13479 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 17:45:00.298965   13479 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 17:45:00.459150   13479 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 17:45:00.459520   13479 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 17:45:00.960791   13479 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.066802ms
	I1016 17:45:00.964389   13479 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 17:45:00.964497   13479 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.210:8443/livez
	I1016 17:45:00.964616   13479 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 17:45:00.964745   13479 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 17:45:03.715079   13479 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.752814787s
	I1016 17:45:05.062201   13479 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.101350282s
	I1016 17:45:06.961944   13479 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00238182s
	I1016 17:45:06.975408   13479 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 17:45:06.993969   13479 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 17:45:07.015615   13479 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 17:45:07.015925   13479 kubeadm.go:318] [mark-control-plane] Marking the node addons-019580 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 17:45:07.037137   13479 kubeadm.go:318] [bootstrap-token] Using token: iy8koo.ob47p7ngmhdo6rin
	I1016 17:45:07.038391   13479 out.go:252]   - Configuring RBAC rules ...
	I1016 17:45:07.038671   13479 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 17:45:07.050869   13479 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 17:45:07.062485   13479 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 17:45:07.067232   13479 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 17:45:07.073557   13479 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 17:45:07.078222   13479 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 17:45:07.371383   13479 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 17:45:07.797521   13479 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 17:45:08.369354   13479 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 17:45:08.370592   13479 kubeadm.go:318] 
	I1016 17:45:08.370657   13479 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 17:45:08.370664   13479 kubeadm.go:318] 
	I1016 17:45:08.370724   13479 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 17:45:08.370730   13479 kubeadm.go:318] 
	I1016 17:45:08.370750   13479 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 17:45:08.370807   13479 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 17:45:08.370856   13479 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 17:45:08.370866   13479 kubeadm.go:318] 
	I1016 17:45:08.370922   13479 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 17:45:08.370930   13479 kubeadm.go:318] 
	I1016 17:45:08.371004   13479 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 17:45:08.371016   13479 kubeadm.go:318] 
	I1016 17:45:08.371069   13479 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 17:45:08.371151   13479 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 17:45:08.371226   13479 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 17:45:08.371236   13479 kubeadm.go:318] 
	I1016 17:45:08.371338   13479 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 17:45:08.371444   13479 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 17:45:08.371453   13479 kubeadm.go:318] 
	I1016 17:45:08.371558   13479 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token iy8koo.ob47p7ngmhdo6rin \
	I1016 17:45:08.371700   13479 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8f26974a037d800951ad69ff370a86700bc23eb1aa66fc0596ad091c23163bb4 \
	I1016 17:45:08.371729   13479 kubeadm.go:318] 	--control-plane 
	I1016 17:45:08.371737   13479 kubeadm.go:318] 
	I1016 17:45:08.371853   13479 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 17:45:08.371868   13479 kubeadm.go:318] 
	I1016 17:45:08.372171   13479 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token iy8koo.ob47p7ngmhdo6rin \
	I1016 17:45:08.372282   13479 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8f26974a037d800951ad69ff370a86700bc23eb1aa66fc0596ad091c23163bb4 
	I1016 17:45:08.373676   13479 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
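The Service-Kubelet warning is benign here: minikube started the kubelet itself via systemctl earlier in this log, so kubeadm's "not enabled" check has no effect. The --discovery-token-ca-cert-hash in the join commands is the SHA-256 of the cluster CA's SubjectPublicKeyInfo; a joining node uses it to pin the CA before trusting anything the API server says. It can be recomputed with the standard kubeadm recipe (assuming an RSA CA key, and using the certificatesDir from the config above):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'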
	I1016 17:45:08.373707   13479 cni.go:84] Creating CNI manager for ""
	I1016 17:45:08.373728   13479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 17:45:08.375278   13479 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1016 17:45:08.376594   13479 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1016 17:45:08.390350   13479 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
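The 496-byte conflist itself is not dumped to the log, but a bridge CNI config matching the podSubnet above typically looks like the following sketch (illustrative only; field values such as the bridge name are assumptions, not taken from minikube's actual 1-k8s.conflist):

	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}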
	I1016 17:45:08.416744   13479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 17:45:08.416825   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:08.416902   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-019580 minikube.k8s.io/updated_at=2025_10_16T17_45_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=addons-019580 minikube.k8s.io/primary=true
	I1016 17:45:08.566396   13479 ops.go:34] apiserver oom_adj: -16
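The oom_adj value of -16 is read back from /proc/<pid>/oom_adj to confirm the apiserver is protected: on the legacy -17..15 scale, -16 makes it one of the last processes the kernel OOM killer will pick. Modern kernels expose the same knob as oom_score_adj (range -1000..1000), which can be checked the same way, e.g.:

	out/minikube-linux-amd64 -p addons-019580 ssh 'cat /proc/$(pgrep kube-apiserver)/oom_score_adj'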
	I1016 17:45:08.566519   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:09.067235   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:09.567574   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:10.066794   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:10.567373   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:11.067396   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:11.567196   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:12.066741   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:12.566977   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:13.066741   13479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:45:13.160642   13479 kubeadm.go:1113] duration metric: took 4.743880268s to wait for elevateKubeSystemPrivileges
	I1016 17:45:13.160678   13479 kubeadm.go:402] duration metric: took 17.798511384s to StartCluster
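The repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, are a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is live, and the minikube-rbac cluster-admin binding for kube-system:default created earlier only matters after that. A rough shell equivalent of the retry loop (a sketch of the Go logic, not a command from the log):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done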
	I1016 17:45:13.160701   13479 settings.go:142] acquiring lock: {Name:mk8956f02e21b33221420cc620d69233a6a526cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:45:13.160843   13479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 17:45:13.161360   13479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/kubeconfig: {Name:mk4f128d20bbd14d57d7fe32f778269e6fd1a04c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:45:13.161596   13479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 17:45:13.161616   13479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 17:45:13.161682   13479 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
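The toEnable map is the effective addon selection for this profile: entries marked true (ingress, registry, metrics-server, csi-hostpath-driver, volcano, and so on) get configured, the rest are skipped. Each enabled addon is handled concurrently, and each handler launches its own kvm2 driver plugin RPC server on a loopback port, which is why the "Setting addon ..." and "Launching plugin server ..." lines below interleave. The same selection can be inspected or changed per profile from the CLI:

	out/minikube-linux-amd64 -p addons-019580 addons list
	out/minikube-linux-amd64 -p addons-019580 addons enable metrics-server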
	I1016 17:45:13.161810   13479 addons.go:69] Setting yakd=true in profile "addons-019580"
	I1016 17:45:13.161839   13479 config.go:182] Loaded profile config "addons-019580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:45:13.161844   13479 addons.go:69] Setting inspektor-gadget=true in profile "addons-019580"
	I1016 17:45:13.161859   13479 addons.go:238] Setting addon yakd=true in "addons-019580"
	I1016 17:45:13.161861   13479 addons.go:69] Setting default-storageclass=true in profile "addons-019580"
	I1016 17:45:13.161882   13479 addons.go:238] Setting addon inspektor-gadget=true in "addons-019580"
	I1016 17:45:13.161887   13479 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-019580"
	I1016 17:45:13.161889   13479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-019580"
	I1016 17:45:13.161896   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.161905   13479 addons.go:69] Setting storage-provisioner=true in profile "addons-019580"
	I1016 17:45:13.161886   13479 addons.go:69] Setting registry-creds=true in profile "addons-019580"
	I1016 17:45:13.161918   13479 addons.go:238] Setting addon storage-provisioner=true in "addons-019580"
	I1016 17:45:13.161922   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.161932   13479 addons.go:238] Setting addon registry-creds=true in "addons-019580"
	I1016 17:45:13.161941   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.161971   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.162095   13479 addons.go:69] Setting volcano=true in profile "addons-019580"
	I1016 17:45:13.162134   13479 addons.go:238] Setting addon volcano=true in "addons-019580"
	I1016 17:45:13.162174   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.162370   13479 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-019580"
	I1016 17:45:13.162373   13479 addons.go:69] Setting metrics-server=true in profile "addons-019580"
	I1016 17:45:13.162382   13479 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-019580"
	I1016 17:45:13.162384   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.162390   13479 addons.go:238] Setting addon metrics-server=true in "addons-019580"
	I1016 17:45:13.162402   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.162407   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.162420   13479 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-019580"
	I1016 17:45:13.162426   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.162432   13479 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-019580"
	I1016 17:45:13.161898   13479 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-019580"
	I1016 17:45:13.162446   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.162447   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162451   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.162477   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162609   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.162639   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162753   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.162814   13479 addons.go:69] Setting registry=true in profile "addons-019580"
	I1016 17:45:13.162877   13479 addons.go:238] Setting addon registry=true in "addons-019580"
	I1016 17:45:13.162896   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162916   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.162411   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.163270   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.163396   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162816   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162827   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.163919   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162829   13479 addons.go:69] Setting volumesnapshots=true in profile "addons-019580"
	I1016 17:45:13.164264   13479 addons.go:238] Setting addon volumesnapshots=true in "addons-019580"
	I1016 17:45:13.164289   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.165250   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.165271   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.162838   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.166499   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.166694   13479 out.go:179] * Verifying Kubernetes components...
	I1016 17:45:13.162840   13479 addons.go:69] Setting cloud-spanner=true in profile "addons-019580"
	I1016 17:45:13.167646   13479 addons.go:238] Setting addon cloud-spanner=true in "addons-019580"
	I1016 17:45:13.162845   13479 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-019580"
	I1016 17:45:13.162849   13479 addons.go:69] Setting ingress=true in profile "addons-019580"
	I1016 17:45:13.162849   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.162414   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.162854   13479 addons.go:69] Setting gcp-auth=true in profile "addons-019580"
	I1016 17:45:13.162855   13479 addons.go:69] Setting ingress-dns=true in profile "addons-019580"
	I1016 17:45:13.167882   13479 addons.go:238] Setting addon ingress-dns=true in "addons-019580"
	I1016 17:45:13.168453   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.168948   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.167908   13479 addons.go:238] Setting addon ingress=true in "addons-019580"
	I1016 17:45:13.168175   13479 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-019580"
	I1016 17:45:13.169155   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.169528   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.169554   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.169646   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.169795   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.169964   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.170028   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.168424   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.168590   13479 mustload.go:65] Loading cluster: addons-019580
	I1016 17:45:13.168612   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.172807   13479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:45:13.172830   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.172859   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.173528   13479 config.go:182] Loaded profile config "addons-019580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:45:13.174001   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.174031   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.174581   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.174649   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.191734   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I1016 17:45:13.192533   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.193153   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I1016 17:45:13.196318   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.196341   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.196864   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.197249   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.197733   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.197751   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.198385   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.198419   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.198661   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.199346   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.199371   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.209231   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I1016 17:45:13.209232   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I1016 17:45:13.210160   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.210825   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.210843   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.210988   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I1016 17:45:13.211668   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.212200   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.212219   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.212770   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.213071   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1016 17:45:13.213640   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.213678   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.214258   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.214483   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
	I1016 17:45:13.214805   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I1016 17:45:13.215346   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.215361   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.215433   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.215928   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.215947   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.216056   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.216931   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.217004   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I1016 17:45:13.217129   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.217338   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.217552   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.217614   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.217921   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.217951   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.218521   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.218605   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.218677   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.219054   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.219070   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.219393   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.219821   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39511
	I1016 17:45:13.219907   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.219938   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.219966   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.219982   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.219985   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I1016 17:45:13.220098   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.220112   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.220400   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.220494   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.220961   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.220986   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.221426   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.222688   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.222853   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.224903   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.225320   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.225368   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.225407   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.230673   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.231155   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.231170   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.231515   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.232076   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.232099   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.234999   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39327
	I1016 17:45:13.235957   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.236494   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.236535   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.239188   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.239212   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1016 17:45:13.239215   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I1016 17:45:13.239531   13479 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-019580"
	I1016 17:45:13.239581   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.239949   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.240004   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.243974   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I1016 17:45:13.246679   13479 addons.go:238] Setting addon default-storageclass=true in "addons-019580"
	I1016 17:45:13.246716   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.246917   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I1016 17:45:13.247070   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.247105   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.247133   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I1016 17:45:13.247672   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.248014   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.248203   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.248293   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.248377   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.248999   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.249028   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.249184   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.249195   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.249991   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.250060   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.250511   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.250802   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.250835   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.250913   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.251383   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.251425   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.251499   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.251522   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.251981   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.252005   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.252079   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.252437   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.252710   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.253946   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.254139   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.254267   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.255632   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
	I1016 17:45:13.256726   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I1016 17:45:13.257479   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.258795   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41477
	I1016 17:45:13.259101   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.259305   13479 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 17:45:13.259940   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I1016 17:45:13.260048   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.260064   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.260537   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:13.260584   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.260913   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I1016 17:45:13.260948   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.260980   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.261186   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.261931   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.261964   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.262278   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.262334   13479 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 17:45:13.262348   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 17:45:13.262366   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.262458   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.262470   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.263224   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.263634   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.264230   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.264250   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.264511   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.264270   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.264903   13479 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1016 17:45:13.265567   13479 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1016 17:45:13.265950   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.265964   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.266092   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.266102   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.266176   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:13.266186   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:13.266234   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.266384   13479 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 17:45:13.266397   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1016 17:45:13.266416   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.267017   13479 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1016 17:45:13.267034   13479 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1016 17:45:13.267061   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.268633   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:13.268642   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.268678   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:13.268687   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:13.268696   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:13.268703   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:13.268715   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.269072   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:13.269100   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:13.269107   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	W1016 17:45:13.269191   13479 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1016 17:45:13.270780   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.270834   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.271467   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.271491   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.272087   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.274229   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I1016 17:45:13.274377   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.274413   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
	I1016 17:45:13.274991   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.275015   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.275640   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.275690   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.278855   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.278917   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.279022   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I1016 17:45:13.279160   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.279229   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.279249   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.279272   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.279291   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.279364   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.279562   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.282400   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.282496   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.282523   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.282552   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.282608   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.282622   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.282653   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.283222   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.283309   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.283423   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I1016 17:45:13.283514   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.283534   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.283407   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.283853   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.284236   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.284341   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.284499   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I1016 17:45:13.284804   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.284822   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.284925   13479 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1016 17:45:13.284995   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.284951   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.285268   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.285335   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.285376   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.285388   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.285418   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.286046   13479 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1016 17:45:13.286060   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1016 17:45:13.286075   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.286105   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.286077   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.286266   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.286278   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.286533   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.286894   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.286973   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.286999   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.286982   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I1016 17:45:13.287446   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.287515   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.287541   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.287772   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.289305   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.289347   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.290305   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.290980   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.291020   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.291243   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I1016 17:45:13.291959   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.291963   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.292593   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.292641   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.292957   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.293626   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.293642   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.294046   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.294491   13479 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1016 17:45:13.294594   13479 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1016 17:45:13.294749   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:13.294778   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I1016 17:45:13.294793   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:13.294909   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.294971   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1016 17:45:13.295008   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.296079   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.296223   13479 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 17:45:13.296849   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1016 17:45:13.296870   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.296361   13479 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 17:45:13.296918   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1016 17:45:13.296930   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.297244   13479 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1016 17:45:13.297378   13479 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1016 17:45:13.298040   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.298144   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.298201   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.298695   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.299298   13479 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1016 17:45:13.299313   13479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1016 17:45:13.299333   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.299489   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.299655   13479 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1016 17:45:13.299999   13479 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1016 17:45:13.299902   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.299937   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.300320   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.300414   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.300556   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.300742   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.300853   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.302127   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.302146   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.302734   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.303729   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.304293   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.304606   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.307318   13479 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1016 17:45:13.308555   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I1016 17:45:13.308737   13479 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 17:45:13.308779   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1016 17:45:13.308808   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.309679   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.310646   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.311030   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.310915   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.312088   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.312579   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.313067   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.313797   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.315361   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.315378   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.315397   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.315457   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.315491   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.315542   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.315566   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I1016 17:45:13.315365   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.315610   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.315664   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.315870   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.315914   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.316014   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.316026   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.316045   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.316056   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.316062   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.316155   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.316183   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.316252   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.316246   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.316258   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.316273   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.316418   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.316665   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.316760   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.316974   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.317032   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.317593   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.317558   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.317642   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.317852   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.317966   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.318236   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.318364   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.318378   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.318445   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.318488   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.318875   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I1016 17:45:13.318934   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.319152   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.319678   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.320316   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.320344   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.320466   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33861
	I1016 17:45:13.320849   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.320871   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.320958   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1016 17:45:13.321059   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.321081   13479 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1016 17:45:13.321438   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.321993   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.322018   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.322033   13479 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1016 17:45:13.322047   13479 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1016 17:45:13.322063   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.322256   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1016 17:45:13.322588   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.322810   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.323523   13479 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:45:13.323677   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I1016 17:45:13.324075   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:13.324609   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.324697   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:13.324714   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:13.324973   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1016 17:45:13.325093   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:13.325300   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:13.325753   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.325945   13479 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1016 17:45:13.326019   13479 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:45:13.326028   13479 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 17:45:13.326049   13479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 17:45:13.326068   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.327033   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1016 17:45:13.327630   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:13.327651   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.327830   13479 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 17:45:13.327843   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1016 17:45:13.327867   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.328071   13479 out.go:179]   - Using image docker.io/registry:3.0.0
	I1016 17:45:13.328540   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.328659   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.329099   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1016 17:45:13.329211   13479 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1016 17:45:13.329650   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.330102   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.330150   13479 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1016 17:45:13.330201   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1016 17:45:13.330367   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.330407   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.330486   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.332370   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1016 17:45:13.332368   13479 out.go:179]   - Using image docker.io/busybox:stable
	I1016 17:45:13.333374   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.333634   13479 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 17:45:13.333650   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1016 17:45:13.333669   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.333808   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.334060   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.334103   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.334354   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.334442   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.334471   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.334499   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.334802   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.334809   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.334823   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.334908   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1016 17:45:13.334934   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.334985   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.335097   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.335254   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.335446   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.335478   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.335746   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.335937   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.336077   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.336347   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.337276   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1016 17:45:13.338158   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.338551   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.338577   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.338792   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.338963   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.339107   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.339350   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:13.339575   13479 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1016 17:45:13.340816   13479 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1016 17:45:13.340832   13479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1016 17:45:13.340846   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:13.345201   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.345200   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:13.345232   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:13.345249   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:13.345410   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:13.345558   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:13.345706   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	W1016 17:45:13.728661   13479 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43668->192.168.39.210:22: read: connection reset by peer
	I1016 17:45:13.728699   13479 retry.go:31] will retry after 323.352196ms: ssh: handshake failed: read tcp 192.168.39.1:43668->192.168.39.210:22: read: connection reset by peer
	I1016 17:45:13.858006   13479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 17:45:13.858042   13479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 17:45:14.094584   13479 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:14.094623   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1016 17:45:14.221235   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1016 17:45:14.249807   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 17:45:14.289397   13479 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1016 17:45:14.289421   13479 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1016 17:45:14.310292   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 17:45:14.323095   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 17:45:14.339710   13479 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1016 17:45:14.339733   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1016 17:45:14.399025   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 17:45:14.399558   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 17:45:14.464618   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 17:45:14.476938   13479 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1016 17:45:14.476968   13479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1016 17:45:14.510154   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 17:45:14.620340   13479 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1016 17:45:14.620370   13479 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1016 17:45:14.671111   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 17:45:14.743933   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:14.961179   13479 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1016 17:45:14.961208   13479 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1016 17:45:15.025541   13479 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1016 17:45:15.025570   13479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1016 17:45:15.045522   13479 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1016 17:45:15.045542   13479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1016 17:45:15.135912   13479 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1016 17:45:15.135937   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1016 17:45:15.216329   13479 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1016 17:45:15.216354   13479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1016 17:45:15.343031   13479 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1016 17:45:15.343059   13479 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1016 17:45:15.376309   13479 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1016 17:45:15.376333   13479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1016 17:45:15.410417   13479 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 17:45:15.410441   13479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1016 17:45:15.438927   13479 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1016 17:45:15.438954   13479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1016 17:45:15.509389   13479 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1016 17:45:15.509415   13479 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1016 17:45:15.513922   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1016 17:45:15.636850   13479 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1016 17:45:15.636876   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1016 17:45:15.654245   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 17:45:15.687588   13479 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1016 17:45:15.687615   13479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1016 17:45:15.791437   13479 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 17:45:15.791459   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1016 17:45:15.987403   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1016 17:45:16.091379   13479 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1016 17:45:16.091404   13479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1016 17:45:16.146592   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 17:45:16.334305   13479 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1016 17:45:16.334333   13479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1016 17:45:16.704306   13479 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.846243098s)
	I1016 17:45:16.704330   13479 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1016 17:45:16.704353   13479 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.846314948s)
	I1016 17:45:16.705023   13479 node_ready.go:35] waiting up to 6m0s for node "addons-019580" to be "Ready" ...
	I1016 17:45:16.716850   13479 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1016 17:45:16.716886   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1016 17:45:16.718727   13479 node_ready.go:49] node "addons-019580" is "Ready"
	I1016 17:45:16.718749   13479 node_ready.go:38] duration metric: took 13.707219ms for node "addons-019580" to be "Ready" ...
	I1016 17:45:16.718761   13479 api_server.go:52] waiting for apiserver process to appear ...
	I1016 17:45:16.718803   13479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 17:45:16.905518   13479 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1016 17:45:16.905540   13479 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1016 17:45:17.081505   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.860227854s)
	I1016 17:45:17.081574   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:17.081592   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:17.081914   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:17.081918   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:17.081939   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:17.081948   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:17.081957   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:17.082197   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:17.082214   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:17.123294   13479 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1016 17:45:17.123318   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1016 17:45:17.272584   13479 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-019580" context rescaled to 1 replicas
	I1016 17:45:17.414135   13479 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1016 17:45:17.414165   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1016 17:45:17.995858   13479 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 17:45:17.995893   13479 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1016 17:45:18.427436   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 17:45:19.230097   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.980261112s)
	I1016 17:45:19.230171   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230185   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230193   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.919870633s)
	I1016 17:45:19.230219   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230228   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230297   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.907140053s)
	I1016 17:45:19.230351   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230364   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230369   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.830786941s)
	I1016 17:45:19.230397   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230322   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.831259158s)
	I1016 17:45:19.230447   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.765807812s)
	I1016 17:45:19.230464   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230474   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230412   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230431   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230527   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230617   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.230634   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.230661   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.230668   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.230675   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230681   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230811   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.230844   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.230870   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.230875   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.230878   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.230885   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230886   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.230889   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.230893   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230895   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.230900   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230901   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.230904   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230908   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230910   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.230912   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230918   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230925   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.230831   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.230846   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.230853   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.230979   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.230986   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.231247   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.231349   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.231371   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.231403   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.231431   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.231460   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.231467   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.231523   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.231569   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.231576   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.233067   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.233108   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.233208   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.233235   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.230861   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.233625   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:19.233626   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.233657   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:19.257258   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:19.257283   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:19.257543   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:19.257563   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:20.791465   13479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1016 17:45:20.791510   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:20.795146   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:20.795676   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:20.795770   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:20.796044   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:20.796268   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:20.796444   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:20.796617   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
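
The ssh client parameters logged above are enough to reach the guest by hand during a post-mortem; a manual equivalent of what sshutil.go sets up (IP, user, and key path taken from the log line; StrictHostKeyChecking is disabled here only because the VM's host key is ephemeral):

	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa \
	  docker@192.168.39.210
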
	I1016 17:45:21.003183   13479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1016 17:45:21.128845   13479 addons.go:238] Setting addon gcp-auth=true in "addons-019580"
	I1016 17:45:21.128898   13479 host.go:66] Checking if "addons-019580" exists ...
	I1016 17:45:21.129264   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:21.129300   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:21.143022   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I1016 17:45:21.143473   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:21.143905   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:21.143926   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:21.144326   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:21.144839   13479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:45:21.144867   13479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:45:21.159669   13479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I1016 17:45:21.160244   13479 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:45:21.160769   13479 main.go:141] libmachine: Using API Version  1
	I1016 17:45:21.160807   13479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:45:21.161185   13479 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:45:21.161370   13479 main.go:141] libmachine: (addons-019580) Calling .GetState
	I1016 17:45:21.163174   13479 main.go:141] libmachine: (addons-019580) Calling .DriverName
	I1016 17:45:21.163403   13479 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1016 17:45:21.163423   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHHostname
	I1016 17:45:21.166901   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:21.167428   13479 main.go:141] libmachine: (addons-019580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ad:4e", ip: ""} in network mk-addons-019580: {Iface:virbr1 ExpiryTime:2025-10-16 18:44:44 +0000 UTC Type:0 Mac:52:54:00:d1:ad:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:addons-019580 Clientid:01:52:54:00:d1:ad:4e}
	I1016 17:45:21.167453   13479 main.go:141] libmachine: (addons-019580) DBG | domain addons-019580 has defined IP address 192.168.39.210 and MAC address 52:54:00:d1:ad:4e in network mk-addons-019580
	I1016 17:45:21.167679   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHPort
	I1016 17:45:21.167874   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHKeyPath
	I1016 17:45:21.168042   13479 main.go:141] libmachine: (addons-019580) Calling .GetSSHUsername
	I1016 17:45:21.168222   13479 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/addons-019580/id_rsa Username:docker}
	I1016 17:45:22.498607   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.988412886s)
	I1016 17:45:22.498674   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.498681   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.827510892s)
	I1016 17:45:22.498717   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.498732   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.498689   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.498765   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.75480029s)
	W1016 17:45:22.498817   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
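
The stderr above means client-side validation rejected ig-crd.yaml because one of its YAML documents carries no apiVersion or kind header, so every other object in the apply is created while the CRD file fails as a whole. The same complaint can be reproduced without mutating the cluster (paths as in the log; assumes kubectl is on PATH):

	# client-side dry run hits the same schema check and leaves the cluster untouched
	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# show where each document starts and which ones declare apiVersion/kind
	grep -n -e '^---' -e '^apiVersion:' -e '^kind:' /etc/kubernetes/addons/ig-crd.yaml
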
	I1016 17:45:22.498851   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.844576463s)
	I1016 17:45:22.498872   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.498881   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.498891   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.511461403s)
	I1016 17:45:22.498790   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.984844539s)
	I1016 17:45:22.498939   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.498952   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.498849   13479 retry.go:31] will retry after 288.488313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
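
The AppArmor warning repeated in both attempts is independent of the validation failure: since Kubernetes v1.30 the container.apparmor.security.beta.kubernetes.io/<container> annotation is deprecated in favor of the securityContext.appArmorProfile field. A hedged sketch of moving the gadget DaemonSet over (the container name is inferred from the annotation suffix; the manifest itself ships with minikube):

	kubectl -n gadget patch daemonset gadget --type=strategic -p \
	  '{"spec":{"template":{"spec":{"containers":[{"name":"gadget","securityContext":{"appArmorProfile":{"type":"RuntimeDefault"}}}]}}}}'
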
	I1016 17:45:22.498918   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.498997   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.499015   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.352394215s)
	W1016 17:45:22.499034   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
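
Unlike the ig-crd case, this failure is purely an ordering race: the VolumeSnapshotClass object is submitted in the same apply that creates the CRD defining it, and the API server has not yet established the new type, hence "ensure CRDs are installed first". The scheduled retry succeeds once the CRD is registered; done by hand, the race is avoided by waiting for the CRD explicitly (resource names taken from the stdout above):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
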
	I1016 17:45:22.499043   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499038   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499050   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.499056   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.499060   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.499066   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.499068   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.499071   13479 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.780254234s)
	I1016 17:45:22.499080   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.499089   13479 api_server.go:72] duration metric: took 9.337442407s to wait for apiserver process to appear ...
	I1016 17:45:22.499098   13479 api_server.go:88] waiting for apiserver healthz status ...
	I1016 17:45:22.499050   13479 retry.go:31] will retry after 281.770638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1016 17:45:22.499112   13479 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I1016 17:45:22.499226   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:22.499254   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499260   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.499267   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.499284   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:22.499297   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.499358   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499364   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.499371   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.499376   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.499428   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499437   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.499444   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.499450   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.499733   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:22.499763   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:22.499782   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499789   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.499797   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499804   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.499804   13479 addons.go:479] Verifying addon metrics-server=true in "addons-019580"
	I1016 17:45:22.499971   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:22.499990   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.499996   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.500053   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:22.500073   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.501677   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.501687   13479 addons.go:479] Verifying addon ingress=true in "addons-019580"
	I1016 17:45:22.501831   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.501845   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.501856   13479 addons.go:479] Verifying addon registry=true in "addons-019580"
	I1016 17:45:22.504088   13479 out.go:179] * Verifying registry addon...
	I1016 17:45:22.504090   13479 out.go:179] * Verifying ingress addon...
	I1016 17:45:22.504092   13479 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-019580 service yakd-dashboard -n yakd-dashboard
	
	I1016 17:45:22.505825   13479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1016 17:45:22.506985   13479 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1016 17:45:22.564266   13479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 17:45:22.564297   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:22.564680   13479 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1016 17:45:22.564696   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
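
The kapi.go polling above amounts to a label-selector readiness wait; the same conditions can be checked interactively with kubectl wait (selectors and namespaces from the log lines):

	kubectl -n kube-system wait --for=condition=ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=120s
	kubectl -n ingress-nginx wait --for=condition=ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=120s
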
	I1016 17:45:22.565992   13479 api_server.go:279] https://192.168.39.210:8443/healthz returned 200:
	ok
	I1016 17:45:22.568457   13479 api_server.go:141] control plane version: v1.34.1
	I1016 17:45:22.568486   13479 api_server.go:131] duration metric: took 69.381213ms to wait for apiserver health ...
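
The healthz probe is a plain GET that returns the literal "ok"; it is served over the apiserver's self-signed TLS, so a manual check either skips verification or supplies the cluster CA (address and port from the log; the CA path is minikube's usual location on the node and is an assumption here):

	curl -k https://192.168.39.210:8443/healthz
	# or, verifying against the cluster CA from inside the guest:
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.39.210:8443/healthz
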
	I1016 17:45:22.568496   13479 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 17:45:22.624853   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:22.624873   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:22.625267   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:22.625307   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:22.625318   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:22.662333   13479 system_pods.go:59] 17 kube-system pods found
	I1016 17:45:22.662369   13479 system_pods.go:61] "amd-gpu-device-plugin-9dsld" [7b8b7737-1a02-462d-a0a0-742829716fd8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:22.662377   13479 system_pods.go:61] "coredns-66bc5c9577-7lqfl" [f53e5849-969b-4a3f-a7bc-6de22854cd48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:22.662395   13479 system_pods.go:61] "coredns-66bc5c9577-bclq8" [d0e00fd9-01b4-49fc-a966-4b66cf7511b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:22.662404   13479 system_pods.go:61] "etcd-addons-019580" [ad2410a1-ca4c-4a28-bcfd-40a5f33e7aad] Running
	I1016 17:45:22.662414   13479 system_pods.go:61] "kube-apiserver-addons-019580" [7b3e854c-1f06-4213-865b-d18c58a35cc1] Running
	I1016 17:45:22.662420   13479 system_pods.go:61] "kube-controller-manager-addons-019580" [68dd21b9-2fa6-45dc-8826-98f03ad38909] Running
	I1016 17:45:22.662428   13479 system_pods.go:61] "kube-ingress-dns-minikube" [277dd7f0-484c-49a7-9288-696bb6c358fc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:22.662438   13479 system_pods.go:61] "kube-proxy-npsls" [9cb852db-a2c1-43ca-aec1-05e353515731] Running
	I1016 17:45:22.662444   13479 system_pods.go:61] "kube-scheduler-addons-019580" [5608c68b-930e-4de6-b4a6-b19835d4d631] Running
	I1016 17:45:22.662451   13479 system_pods.go:61] "metrics-server-85b7d694d7-n7m6f" [80bf5f5a-64bc-418a-84ab-e35b334f4a34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:22.662459   13479 system_pods.go:61] "nvidia-device-plugin-daemonset-b4mml" [822e9daf-f119-4305-a6e2-316dc27de6e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:22.662466   13479 system_pods.go:61] "registry-6b586f9694-dzfmg" [8d164814-7fd0-4b56-a4ed-12771b631303] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:22.662471   13479 system_pods.go:61] "registry-creds-764b6fb674-mt69v" [4b927b8c-9932-4bdf-8d72-09e8c2abbd63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:22.662479   13479 system_pods.go:61] "registry-proxy-f58hj" [d0a773e3-ad59-4a57-89ed-1b4a3eb52390] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:22.662484   13479 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5xdqx" [22a2166d-ef6a-40e5-adeb-50e436974c8b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:22.662496   13479 system_pods.go:61] "snapshot-controller-7d9fbc56b8-87pnr" [2d7b7be4-103e-4c11-93a1-5aee83b8a4c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:22.662500   13479 system_pods.go:61] "storage-provisioner" [efdb302c-4425-42dd-b594-7e6a54836850] Running
	I1016 17:45:22.662507   13479 system_pods.go:74] duration metric: took 94.005416ms to wait for pod list to return data ...
	I1016 17:45:22.662516   13479 default_sa.go:34] waiting for default service account to be created ...
	I1016 17:45:22.672335   13479 default_sa.go:45] found service account: "default"
	I1016 17:45:22.672356   13479 default_sa.go:55] duration metric: took 9.832918ms for default service account to be created ...
	I1016 17:45:22.672366   13479 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 17:45:22.684959   13479 system_pods.go:86] 17 kube-system pods found
	I1016 17:45:22.684998   13479 system_pods.go:89] "amd-gpu-device-plugin-9dsld" [7b8b7737-1a02-462d-a0a0-742829716fd8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:22.685009   13479 system_pods.go:89] "coredns-66bc5c9577-7lqfl" [f53e5849-969b-4a3f-a7bc-6de22854cd48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:22.685026   13479 system_pods.go:89] "coredns-66bc5c9577-bclq8" [d0e00fd9-01b4-49fc-a966-4b66cf7511b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:22.685034   13479 system_pods.go:89] "etcd-addons-019580" [ad2410a1-ca4c-4a28-bcfd-40a5f33e7aad] Running
	I1016 17:45:22.685040   13479 system_pods.go:89] "kube-apiserver-addons-019580" [7b3e854c-1f06-4213-865b-d18c58a35cc1] Running
	I1016 17:45:22.685057   13479 system_pods.go:89] "kube-controller-manager-addons-019580" [68dd21b9-2fa6-45dc-8826-98f03ad38909] Running
	I1016 17:45:22.685080   13479 system_pods.go:89] "kube-ingress-dns-minikube" [277dd7f0-484c-49a7-9288-696bb6c358fc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:22.685092   13479 system_pods.go:89] "kube-proxy-npsls" [9cb852db-a2c1-43ca-aec1-05e353515731] Running
	I1016 17:45:22.685100   13479 system_pods.go:89] "kube-scheduler-addons-019580" [5608c68b-930e-4de6-b4a6-b19835d4d631] Running
	I1016 17:45:22.685109   13479 system_pods.go:89] "metrics-server-85b7d694d7-n7m6f" [80bf5f5a-64bc-418a-84ab-e35b334f4a34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:22.685145   13479 system_pods.go:89] "nvidia-device-plugin-daemonset-b4mml" [822e9daf-f119-4305-a6e2-316dc27de6e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:22.685159   13479 system_pods.go:89] "registry-6b586f9694-dzfmg" [8d164814-7fd0-4b56-a4ed-12771b631303] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:22.685167   13479 system_pods.go:89] "registry-creds-764b6fb674-mt69v" [4b927b8c-9932-4bdf-8d72-09e8c2abbd63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:22.685175   13479 system_pods.go:89] "registry-proxy-f58hj" [d0a773e3-ad59-4a57-89ed-1b4a3eb52390] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:22.685183   13479 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5xdqx" [22a2166d-ef6a-40e5-adeb-50e436974c8b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:22.685192   13479 system_pods.go:89] "snapshot-controller-7d9fbc56b8-87pnr" [2d7b7be4-103e-4c11-93a1-5aee83b8a4c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:22.685200   13479 system_pods.go:89] "storage-provisioner" [efdb302c-4425-42dd-b594-7e6a54836850] Running
	I1016 17:45:22.685210   13479 system_pods.go:126] duration metric: took 12.83691ms to wait for k8s-apps to be running ...
	I1016 17:45:22.685223   13479 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 17:45:22.685349   13479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
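
systemctl communicates the is-active result through its exit code, which is why --quiet suffices for the test; the check can be repeated by hand:

	# exit status 0 means the unit is active; --quiet suppresses the state string
	sudo systemctl is-active --quiet kubelet && echo "kubelet is active"
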
	I1016 17:45:22.781877   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 17:45:22.788624   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:23.043513   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:23.043579   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:23.217508   13479 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.054080302s)
	I1016 17:45:23.217576   13479 system_svc.go:56] duration metric: took 532.349415ms WaitForService to wait for kubelet
	I1016 17:45:23.217590   13479 kubeadm.go:586] duration metric: took 10.055941916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 17:45:23.217620   13479 node_conditions.go:102] verifying NodePressure condition ...
	I1016 17:45:23.218029   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.790542435s)
	I1016 17:45:23.218087   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:23.218108   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:23.218368   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:23.218385   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:23.218394   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:23.218400   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:23.218663   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:23.218679   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:23.218689   13479 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-019580"
	I1016 17:45:23.218969   13479 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1016 17:45:23.220462   13479 out.go:179] * Verifying csi-hostpath-driver addon...
	I1016 17:45:23.221892   13479 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:45:23.222529   13479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1016 17:45:23.223214   13479 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1016 17:45:23.223234   13479 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1016 17:45:23.236132   13479 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 17:45:23.236153   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:23.236436   13479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1016 17:45:23.236455   13479 node_conditions.go:123] node cpu capacity is 2
	I1016 17:45:23.236465   13479 node_conditions.go:105] duration metric: took 18.84047ms to run NodePressure ...
	I1016 17:45:23.236476   13479 start.go:241] waiting for startup goroutines ...
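
The NodePressure verification reads the node's capacity fields; the same figures reported above (2 CPUs, 17734596Ki of ephemeral storage) are visible directly on the Node object:

	# prints the full capacity map, including cpu and ephemeral-storage
	kubectl get node addons-019580 -o jsonpath='{.status.capacity}'
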
	I1016 17:45:23.386133   13479 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1016 17:45:23.386155   13479 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1016 17:45:23.516679   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:23.520532   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:23.586936   13479 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 17:45:23.586956   13479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1016 17:45:23.700106   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 17:45:23.727675   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:24.034435   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.034809   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:24.231787   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:24.514340   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.520592   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:24.731217   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:25.020304   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:25.029604   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:25.241197   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:25.412083   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.630148406s)
	I1016 17:45:25.412157   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:25.412172   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:25.412441   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:25.412523   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:25.412549   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:25.412563   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:25.412574   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:25.412828   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:25.412857   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:25.412861   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:25.532856   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:25.532947   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:25.728006   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:25.968015   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.179344973s)
	W1016 17:45:25.968063   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:25.968088   13479 retry.go:31] will retry after 526.461498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:25.968133   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.267967306s)
	I1016 17:45:25.968187   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:25.968204   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:25.968485   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:25.968501   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:25.968509   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:45:25.968514   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:45:25.968517   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:45:25.968823   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:45:25.968838   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:45:25.969776   13479 addons.go:479] Verifying addon gcp-auth=true in "addons-019580"
	I1016 17:45:25.971201   13479 out.go:179] * Verifying gcp-auth addon...
	I1016 17:45:25.972971   13479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1016 17:45:25.992081   13479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1016 17:45:25.992106   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:26.013851   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.014320   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:26.233484   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:26.481065   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:26.495099   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:26.583204   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.583255   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:26.730204   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:26.980508   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:27.012026   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.012654   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:27.230798   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:27.477937   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:27.516414   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.520235   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:27.678793   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.183649027s)
	W1016 17:45:27.678837   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:27.678856   13479 retry.go:31] will retry after 316.909632ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:27.731305   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:27.979237   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:27.996303   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:28.017271   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:28.017860   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.228742   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:28.477952   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:28.514381   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.514618   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:28.729214   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 17:45:28.879432   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:28.879469   13479 retry.go:31] will retry after 771.944878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:28.977538   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:29.012309   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:29.013275   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.233307   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:29.477237   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:29.511799   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:29.512964   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.652234   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:29.726938   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:29.988402   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:30.010013   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:30.013740   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:30.226867   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:30.478656   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:30.580754   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:30.581002   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:45:30.607554   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:30.607587   13479 retry.go:31] will retry after 896.307308ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:30.726611   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:30.977291   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:31.010369   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.011643   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:31.226233   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:31.476723   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:31.504896   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:31.510406   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:31.510698   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.727147   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:31.977295   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:32.009382   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:32.011909   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:45:32.190959   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:32.191010   13479 retry.go:31] will retry after 2.054779242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:32.226446   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:32.477354   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:32.511727   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:32.512160   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:32.726829   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:32.976987   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:33.009726   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:33.010923   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.226532   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:33.476322   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:33.509272   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:33.511188   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.727216   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:33.979306   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:34.013687   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.013925   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:34.226374   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:34.246373   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:34.479758   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:34.513500   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.513690   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:34.726348   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:34.979809   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:35.012295   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:35.013099   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:35.227886   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:35.480191   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:35.485790   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.239375252s)
	W1016 17:45:35.485838   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:35.485860   13479 retry.go:31] will retry after 2.69097542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:35.509426   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:35.513511   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:35.730235   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:35.980457   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:36.020027   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:36.020279   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.228571   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:36.477203   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:36.511204   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:36.513997   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.727708   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:36.979134   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:37.010354   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:37.013399   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:37.229645   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:37.477583   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:37.511378   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:37.511544   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:37.730458   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:37.976533   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:38.014514   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:38.015357   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:38.177596   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:38.231975   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:38.479274   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:38.512152   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:38.512390   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:38.727594   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:38.978930   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:39.010968   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:39.011483   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:39.228506   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:39.312492   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.13485556s)
	W1016 17:45:39.312538   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:39.312560   13479 retry.go:31] will retry after 2.884237062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:39.478149   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:39.510976   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:39.511882   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:39.758427   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:39.979498   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:40.012053   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:40.014287   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:40.228588   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:40.479819   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:40.516481   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:40.516495   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:40.727506   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:40.980511   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:41.013919   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:41.014415   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:41.228238   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:41.712808   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:41.712863   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:41.713292   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:41.726731   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:41.988085   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:42.019994   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:42.020020   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:42.197320   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:42.228093   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:42.478529   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:42.512703   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:42.514347   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:42.729195   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:42.977399   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:43.014256   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:43.014356   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:43.230785   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:43.296539   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.099176214s)
	W1016 17:45:43.296584   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:43.296607   13479 retry.go:31] will retry after 4.161164826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:43.486748   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:43.513109   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:43.513314   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:43.727981   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:44.193332   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:44.193602   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:44.196212   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:44.291290   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:44.478061   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:44.509555   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:44.512176   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:44.728339   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:44.980682   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:45.011179   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:45.015430   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:45.225922   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:45.476023   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:45.509143   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:45.511049   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:45.727987   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:45.977496   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:46.010082   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:46.012727   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:46.227514   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:46.477155   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:46.508960   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:46.510167   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:46.729589   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:46.979661   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:47.012091   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:47.012216   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:47.228012   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:47.458172   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:47.480966   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:47.510368   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:47.514967   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:47.728642   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:47.977454   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:48.014349   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:48.016299   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:48.240152   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 17:45:48.301286   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:48.301318   13479 retry.go:31] will retry after 14.17290476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:48.476957   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:48.509520   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:48.510230   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:48.726598   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:48.976141   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:49.010458   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:49.011951   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:49.227515   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:49.480849   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:49.512035   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:49.512923   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:49.727332   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:49.977088   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:50.010321   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:50.011051   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:50.230955   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:50.747000   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:50.752216   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:50.752250   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:50.752373   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:50.976973   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:51.009679   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:51.011187   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:51.227517   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:51.477066   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:51.510631   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:51.511034   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:51.727002   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:51.975404   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:52.010066   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:52.010843   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:52.226631   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:52.477100   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:52.509732   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:52.510751   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:52.726742   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:52.977772   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:53.016716   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:53.017402   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:53.229571   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:53.477501   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:53.510645   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:53.511858   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:53.726752   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:53.978220   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:54.012030   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:54.013247   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:54.229213   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:54.475828   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:54.510613   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:54.512472   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:54.727503   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:54.980587   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:55.011805   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:55.013956   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:55.228533   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:55.478879   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:55.510290   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:55.510314   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:55.727318   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:55.976250   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:56.010406   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:56.013717   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:56.580554   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:56.584820   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:56.586964   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:56.587177   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:56.729043   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:56.982751   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:57.011455   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:57.012104   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:57.227236   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:57.478836   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:57.510137   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:57.511631   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:57.726438   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:57.976608   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:58.012152   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:58.014501   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:58.227638   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:58.549700   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:58.551845   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:58.553862   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:58.813366   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:58.978209   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:59.011821   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:59.012519   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:59.227512   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:59.477100   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:59.510305   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:59.511643   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:59.728604   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:59.976701   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:00.010325   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:00.011245   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:00.227395   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:00.478282   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:00.509593   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:00.511881   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:00.736815   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:00.984292   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:01.011852   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:01.017488   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:01.230902   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:01.476626   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:01.512345   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:01.513580   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:01.729959   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:01.977059   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:02.016139   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:02.017502   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:02.227069   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:02.474361   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:46:02.477686   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:02.516474   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:02.525425   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:02.726954   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:02.981022   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:03.015202   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:03.015944   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:03.227439   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 17:46:03.330222   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:46:03.330260   13479 retry.go:31] will retry after 15.34062788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
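Each failed apply above is handed back to the retry helper, which schedules the next attempt after a growing, jittered delay (896ms, then roughly 2.1s, 2.7s, 2.9s, 4.2s, 14.2s, and 15.3s across the attempts logged so far). An illustrative jittered exponential-backoff loop in that spirit; this is a sketch, not minikube's actual retry.go implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo runs op until it succeeds or maxTotal elapses, sleeping a
	// jittered, doubling delay between attempts (uniform in [d/2, 3d/2)).
	func retryExpo(op func() error, initial, maxTotal time.Duration) error {
		deadline := time.Now().Add(maxTotal)
		delay := initial
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up after %v: %w", maxTotal, err)
			}
			sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryExpo(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("apply failed") // stand-in for the kubectl apply above
			}
			return nil
		}, time.Second, 2*time.Minute)
	}

The jitter keeps repeated attempts from landing in lockstep; the doubling explains why the logged intervals climb from under a second to the ~15s gaps near the end of the sequence.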
	I1016 17:46:03.477690   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:03.577719   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:03.577760   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:03.725692   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:03.977058   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:04.011281   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:04.011710   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:04.226624   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:04.478463   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:04.509947   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:04.510534   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:04.726577   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:04.982928   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:05.014389   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:05.015491   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:05.229901   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:05.478001   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:05.512914   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:05.512991   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:05.726390   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:05.977433   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:06.012058   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:06.012862   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:06.229038   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:06.476162   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:06.510676   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:06.511253   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:06.727175   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:06.976814   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:07.009693   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:07.011438   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:07.226236   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:07.478057   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:07.510145   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:07.512245   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:07.728269   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:07.977678   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:08.010627   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:08.010663   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:08.227178   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:08.477632   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:08.511732   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:08.515554   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:08.729438   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:08.985210   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:09.013324   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:09.013427   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:09.231703   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:09.481683   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:09.512265   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:09.516630   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:09.729656   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:09.978110   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:10.013622   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:46:10.014295   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:10.229263   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:10.476998   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:10.512706   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:10.514175   13479 kapi.go:107] duration metric: took 48.008348343s to wait for kubernetes.io/minikube-addons=registry ...
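The interleaved kapi.go lines poll each addon's label selector on a fixed cadence (roughly every 250ms here) until the matching pod leaves Pending, then log the total wait, as with the 48.008348343s recorded for kubernetes.io/minikube-addons=registry just above. A schematic version of that loop; podPhase is a hypothetical stand-in for the real API-server lookup, not minikube's kapi code:

	package main

	import (
		"fmt"
		"time"
	)

	// podPhase is a hypothetical placeholder for querying the phase of the pod
	// matching a label selector (the real code asks the Kubernetes API server).
	func podPhase(selector string) string {
		return "Running" // stubbed so the sketch terminates immediately
	}

	// waitForPod polls until the selected pod reports Running or the timeout
	// hits, mirroring the "waiting for pod ... current state: Pending" cadence.
	func waitForPod(selector string, interval, timeout time.Duration) error {
		start := time.Now()
		for {
			state := podPhase(selector)
			if state == "Running" {
				fmt.Printf("duration metric: took %s to wait for %s ...\n",
					time.Since(start), selector)
				return nil
			}
			if time.Since(start) > timeout {
				return fmt.Errorf("pod %q still %s after %v", selector, state, timeout)
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
			time.Sleep(interval)
		}
	}

	func main() {
		_ = waitForPod("kubernetes.io/minikube-addons=registry",
			250*time.Millisecond, 6*time.Minute)
	}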
	I1016 17:46:10.727577   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:11.004346   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:11.010063   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:11.228026   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:11.477194   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:11.514570   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:11.729435   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:11.977623   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:12.010698   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:12.227110   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:12.478535   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:12.510613   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:12.726285   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:12.978260   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:13.012013   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:13.236026   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:13.476882   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:13.511579   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:13.731833   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:13.980894   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:14.010875   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:14.228836   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:14.477477   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:14.513648   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:14.731820   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:14.977703   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:15.011604   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:15.227663   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:15.477780   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:15.510973   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:15.726184   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:15.975926   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:16.010693   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:16.226545   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:16.476769   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:16.511343   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:16.728380   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:16.981883   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:17.011191   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:17.228019   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:17.478839   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:17.511928   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:17.726616   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:17.976712   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:18.011060   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:18.226374   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:18.478035   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:18.513827   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:18.671977   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:46:18.743761   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:18.979312   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:19.011863   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:19.302098   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:19.480798   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:19.513450   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:19.733804   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:19.841667   13479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.169654872s)
	W1016 17:46:19.841709   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:46:19.841730   13479 retry.go:31] will retry after 26.502769636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
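
The stderr above shows kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest's top-level apiVersion and kind fields are missing (or unparseable); every object kubectl applies must declare both. A minimal sketch of the header such a CRD manifest would need, assuming an inspektor-gadget-style CustomResourceDefinition (the group and resource names below are illustrative, not taken from the failing file, whose contents are not shown in this log):

	# hypothetical CRD header for illustration only
	apiVersion: apiextensions.k8s.io/v1   # omitting this triggers "apiVersion not set"
	kind: CustomResourceDefinition        # omitting this triggers "kind not set"
	metadata:
	  name: traces.gadget.example.io      # hypothetical name (must be <plural>.<group>)
	spec:
	  group: gadget.example.io
	  names:
	    kind: Trace
	    plural: traces
	  scope: Namespaced
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
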
	I1016 17:46:19.977630   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:20.010900   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:20.226967   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:20.476744   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:20.511875   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:20.726047   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:20.975467   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:21.012009   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:21.226701   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:21.478216   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:21.510639   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:21.728549   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:21.978214   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:22.010804   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:22.226139   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:22.477747   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:22.514450   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:22.730193   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:22.978134   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:23.011559   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:23.228224   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:23.477272   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:23.511327   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:23.726407   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:23.979661   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:24.010838   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:24.230864   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:24.476892   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:24.513528   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:24.783970   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:24.977853   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:25.011454   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:25.227377   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:25.479972   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:25.510869   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:25.726879   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:25.981202   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:26.012214   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:26.234021   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:26.476323   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:26.510408   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:26.729493   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:26.978172   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:27.012779   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:27.226391   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:27.603542   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:27.604248   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:27.728330   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:27.989244   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:28.087626   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:28.227585   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:28.477896   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:28.513376   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:28.727853   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:28.982652   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:29.013079   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:29.602181   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:29.602269   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:29.602794   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:29.726905   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:29.976157   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:30.012065   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:30.227250   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:30.478644   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:30.511418   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:30.730044   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:30.977812   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:31.012641   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:31.232007   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:31.476295   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:31.510351   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:31.732251   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:31.991184   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:32.013133   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:32.226186   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:32.480902   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:32.581337   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:32.726625   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:32.980294   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:33.015990   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:33.229300   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:33.479797   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:33.511348   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:33.727959   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:34.155163   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:34.155617   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:34.226495   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:34.479857   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:34.513251   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:34.727787   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:34.986574   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:35.012416   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:35.306158   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:35.477821   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:35.580080   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:35.727284   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:35.982185   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:36.014954   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:36.229730   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:36.477279   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:36.510469   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:36.728069   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:37.115779   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:37.115984   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:37.231834   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:37.478160   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:37.511179   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:37.732531   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:37.976956   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:38.015379   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:38.227507   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:38.477148   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:38.512442   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:38.726904   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:46:38.977244   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:39.010429   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:39.227624   13479 kapi.go:107] duration metric: took 1m16.005091166s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1016 17:46:39.477087   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:39.510604   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:39.976963   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:40.010962   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:40.476229   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:40.511205   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:40.976555   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:41.011229   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:41.477065   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:41.510653   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:41.977587   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:42.010973   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:42.476725   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:42.511081   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:42.975988   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:43.044718   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:43.476981   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:43.511326   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:43.977557   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:44.010658   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:44.476890   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:44.511767   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:44.978206   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:45.011605   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:45.477041   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:45.511946   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:45.976328   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:46.010776   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:46.344933   13479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:46:46.476797   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:46.511154   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:46.976793   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:47.010969   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:46:47.073701   13479 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:46:47.073764   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:46:47.073773   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:46:47.074057   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:46:47.074074   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:46:47.074083   13479 main.go:141] libmachine: Making call to close driver server
	I1016 17:46:47.074090   13479 main.go:141] libmachine: (addons-019580) Calling .Close
	I1016 17:46:47.074109   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	I1016 17:46:47.074356   13479 main.go:141] libmachine: Successfully made call to close driver server
	I1016 17:46:47.074378   13479 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 17:46:47.074359   13479 main.go:141] libmachine: (addons-019580) DBG | Closing plugin on server side
	W1016 17:46:47.074473   13479 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
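
Per the error text itself, the apply could be retried with client-side validation disabled using kubectl's --validate=false flag, though the durable fix would be restoring the missing apiVersion/kind header in ig-crd.yaml. A sketch of that workaround, reusing the exact binary and manifest paths from the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml \
	  -f /etc/kubernetes/addons/ig-deployment.yaml
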
	I1016 17:46:47.476420   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:47.511282   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:47.977687   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:48.010872   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:48.476042   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:48.511412   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:48.976774   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:49.010931   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:49.476808   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:49.511235   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:49.977613   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:50.010459   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:50.477096   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:50.510415   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:50.976894   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:51.010655   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:51.477786   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:51.511202   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:51.976873   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:52.011616   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:52.477187   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:52.510496   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:52.976061   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:53.010445   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:53.475930   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:53.511644   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:53.978136   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:54.010702   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:54.476040   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:54.511282   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:54.976968   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:55.013272   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:55.476080   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:55.510269   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:55.976870   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:56.011107   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:56.476503   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:56.510784   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:56.976353   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:57.010432   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:57.476672   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:57.510614   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:57.976913   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:58.011184   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:58.476736   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:58.511578   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:58.977359   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:59.010674   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:59.477525   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:46:59.511757   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:46:59.977285   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:00.010638   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:00.477243   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:00.510817   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:00.975720   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:01.011367   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:01.477110   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:01.510099   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:01.976451   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:02.010936   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:02.476468   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:02.510980   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:02.976473   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:03.010620   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:03.476525   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:03.510698   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:03.976068   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:04.010336   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:04.476562   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:04.510889   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:04.977059   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:05.010574   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:05.477730   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:05.510863   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:05.976091   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:06.010857   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:06.476095   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:06.510310   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:06.977442   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:07.011042   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:07.476722   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:07.510882   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:07.977713   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:08.012196   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:08.476468   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:08.511035   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:08.976934   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:09.011075   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:09.476524   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:09.511175   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:09.976956   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:10.011845   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:10.476333   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:10.510963   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:10.977155   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:11.010625   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:11.475894   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:11.511450   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:11.976850   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:12.011322   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:12.476681   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:12.512075   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:12.975876   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:13.011468   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:13.476375   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:13.511206   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:13.976941   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:14.011396   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:14.477132   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:14.510581   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:14.977360   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:15.010453   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:15.477104   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:15.510212   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:15.976825   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:16.011278   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:16.476460   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:16.510656   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:16.976313   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:17.011059   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:17.476592   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:17.510973   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:17.976857   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:18.010666   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:18.477682   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:18.512257   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:18.976492   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:19.011403   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:19.477163   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:19.509932   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:19.976280   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:20.077728   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:20.477746   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:20.511305   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:20.977224   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:21.011425   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:21.477220   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:21.510589   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:21.976732   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:22.011617   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:22.476830   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:22.511429   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:22.976261   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:23.011737   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:23.476437   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:23.510924   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:23.976278   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:24.011238   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:24.477107   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:24.511456   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:24.977306   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:25.011222   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:25.476931   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:25.511675   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:25.975954   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:26.011869   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:26.476190   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:26.511665   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:26.976469   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:27.011063   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:27.476381   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:27.510843   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:27.976743   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:28.010807   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:28.476439   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:28.511228   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:28.976346   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:29.010807   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:29.476465   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:29.510927   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:29.976257   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:30.010818   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:30.476808   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:30.511579   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:30.977719   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:31.011083   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:31.476557   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:31.511102   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:31.975694   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:32.010994   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:32.476734   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:32.511341   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:32.975903   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:33.010834   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:33.476379   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:33.511176   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:33.975955   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:34.011179   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:34.477095   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:34.510656   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:34.977269   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:35.010428   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:35.476756   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:35.511251   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:35.976925   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:36.010718   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:36.476361   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:36.511051   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:36.976602   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:37.010911   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:37.476289   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:37.510389   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:37.976640   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:38.011339   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:38.476954   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:38.511317   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:38.977419   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:39.012468   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:39.477445   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:39.511346   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:39.977134   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:40.011439   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:40.479525   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:40.579219   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:40.978608   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:41.012685   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:41.478849   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:41.513678   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:41.978717   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:42.015023   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:42.478275   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:42.515184   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:42.986035   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:43.017003   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:43.479516   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:43.512363   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:43.979469   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:44.012025   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:44.479703   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:44.514454   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:44.980426   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:45.012880   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:45.476713   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:45.510794   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:45.982586   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:46.010631   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:46.480948   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:46.513105   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:46.981796   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:47.012777   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:47.478480   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:47.579146   13479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:47:47.977040   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:48.010341   13479 kapi.go:107] duration metric: took 2m25.503353167s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1016 17:47:48.477772   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:48.976024   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:49.477177   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:49.980051   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:50.478991   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:50.977933   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:51.476168   13479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:47:51.976294   13479 kapi.go:107] duration metric: took 2m26.003323573s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1016 17:47:51.977995   13479 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-019580 cluster.
	I1016 17:47:51.979199   13479 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1016 17:47:51.980390   13479 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1016 17:47:51.981623   13479 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1016 17:47:51.982798   13479 addons.go:514] duration metric: took 2m38.821116456s for enable addons: enabled=[cloud-spanner nvidia-device-plugin registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1016 17:47:51.982832   13479 start.go:246] waiting for cluster config update ...
	I1016 17:47:51.982848   13479 start.go:255] writing updated cluster config ...
	I1016 17:47:51.983075   13479 ssh_runner.go:195] Run: rm -f paused
	I1016 17:47:51.988959   13479 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 17:47:51.992327   13479 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bclq8" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:51.998843   13479 pod_ready.go:94] pod "coredns-66bc5c9577-bclq8" is "Ready"
	I1016 17:47:51.998866   13479 pod_ready.go:86] duration metric: took 6.516498ms for pod "coredns-66bc5c9577-bclq8" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.001225   13479 pod_ready.go:83] waiting for pod "etcd-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.005873   13479 pod_ready.go:94] pod "etcd-addons-019580" is "Ready"
	I1016 17:47:52.005894   13479 pod_ready.go:86] duration metric: took 4.650322ms for pod "etcd-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.009941   13479 pod_ready.go:83] waiting for pod "kube-apiserver-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.015797   13479 pod_ready.go:94] pod "kube-apiserver-addons-019580" is "Ready"
	I1016 17:47:52.015816   13479 pod_ready.go:86] duration metric: took 5.857268ms for pod "kube-apiserver-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.077424   13479 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.393443   13479 pod_ready.go:94] pod "kube-controller-manager-addons-019580" is "Ready"
	I1016 17:47:52.393470   13479 pod_ready.go:86] duration metric: took 316.015855ms for pod "kube-controller-manager-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.593358   13479 pod_ready.go:83] waiting for pod "kube-proxy-npsls" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:52.992554   13479 pod_ready.go:94] pod "kube-proxy-npsls" is "Ready"
	I1016 17:47:52.992582   13479 pod_ready.go:86] duration metric: took 399.201764ms for pod "kube-proxy-npsls" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:53.192697   13479 pod_ready.go:83] waiting for pod "kube-scheduler-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:53.594765   13479 pod_ready.go:94] pod "kube-scheduler-addons-019580" is "Ready"
	I1016 17:47:53.594795   13479 pod_ready.go:86] duration metric: took 402.072525ms for pod "kube-scheduler-addons-019580" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:47:53.594810   13479 pod_ready.go:40] duration metric: took 1.605825074s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 17:47:53.639896   13479 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 17:47:53.641698   13479 out.go:179] * Done! kubectl is now configured to use "addons-019580" cluster and "default" namespace by default
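	
	(Editor's note, not part of the captured log: the gcp-auth output above explains that credentials are injected into every new pod unless the pod carries the `gcp-auth-skip-secret` label. Below is a minimal client-go sketch of opting a pod out; the pod name and the label value "true" are illustrative assumptions — the addon message names only the label key.)
	
	// A minimal sketch, assuming client-go and the default kubeconfig that
	// minikube writes (~/.kube/config). Not minikube's own code.
	package main
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the default kubeconfig, which `minikube start` configures.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-auth", // hypothetical pod name
				Labels: map[string]string{
					// The addon output above names only this key; "true" is an assumed value.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		// The gcp-auth admission webhook should leave this pod's spec untouched.
		if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}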
	
	
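	(Editor's note, not part of the captured log: the kapi.go and pod_ready.go lines above show minikube repeatedly polling pods, at roughly half-second intervals, until they report Ready or a timeout elapses. Below is a rough client-go sketch of that pattern; waitPodReady is an illustrative name and not minikube's actual implementation.)
	
	// A minimal sketch, assuming client-go. Polls a single pod's Ready
	// condition the way the "waiting for pod ... to be Ready" lines above do.
	package podready
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil // pod reports Ready, as in the `pod ... is "Ready"` lines above
					}
				}
			}
			// The log above shows polls at roughly this cadence.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}
	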
	==> CRI-O <==
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.556747934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30b590f1-d947-43fd-a6fd-0ad31aabcaa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.557131883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29321141730a051690abcaf96a324160fe7f2e39c31e7cc16ac3659151578b08,PodSandboxId:d20b036b012034b87696055cef309bf78a32142144b4a319106f8c2d9e786b00,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760636906389732509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d772ea5-6e5c-457a-a18c-fd5017516390,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8432bd68eaec68b008888a3f15042a402594ae5c7db25eb4be6daed245b2c50c,PodSandboxId:6910931def666b54eca41898e28cf4aab2abec81d3bb6ec19fb9bca888c84efd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760636877852501903,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d04de663-3415-49fc-8de0-6c2bcb2781c1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83ae4a84e0766bfbe45990bba2dfdc2adca1921267f8eaabd5938602a2a2ff0,PodSandboxId:02e2d18ecffeb68da4d8800c039bc89fb440b20090e46e743b4143c4deddd515,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760636867417403929,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-rhrxq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1f3aed82-a02e-4f79-b867-a0110167ce6c,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c48396e46409bdab6e9131de9de3ab2381293a63c05dd244acad92ef9fab9fe6,PodSandboxId:580e68211b71239312f77f819dc904a2142d381abe52c3402a2618fff5cc0cb3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636787803143271,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h4c46,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d38d377b-b728-4a5c-bb80-7e82d5f097f7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53a3cd0331029eb0b4e7c4bf44afa86b4919086c3f0df9689c247469a6f812e,PodSandboxId:76440e4561638cb3910fc5d4f772c6acf42eb2500171a405ea39bb493d99fb24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636786306953506,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-648wf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74ca283b-2124-4219-adc2-2019c569c952,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ef48c6334d8dff9f6202451d91f94e74b0cde84ff144710c1e9da7c8e6c74,PodSandboxId:a26fc2fc65d8254704e1b74eb09606a9ba779cd19acd298de8d074a1367a4254,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760636775090915912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-q89mj,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: d5744fd9-0134-43c6-8f05-f31e6045d33a,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b857dfa9e72f862930c918442e5d3c2ebb58d3445ce68222bd83a814858cc09e,PodSandboxId:9393ec65aeaa85f76474e00bb65130d9d4a3a688bd93b20401239945361ee4e6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760636760057098446,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277dd7f0-484c-49a7-9288-696bb6c358fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccec12606b6cbbc60f88f0f1329b04d44a0063cc3698b7e275b8583df81dbafe,PodSandboxId:1546312522bb2447591d9b5d7dd712129c5284d04d12445
f08a158d90788550a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760636750834280552,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9dsld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b8b7737-1a02-462d-a0a0-742829716fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b98ed62270e2be0e51ff399836d5d81848498ba5965a79fee0f41e99a17ecd3,PodSandboxId:fb34634
1d7c388ecf1351fa5eb3adc8067d3a11915e8612a2cd1b6476c105368,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760636720172008526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdb302c-4425-42dd-b594-7e6a54836850,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb9eee15cdd641392160d66491f04a8c739137d0a7a0ace22641906bd6df852,PodSandboxId:e67d58e116f251399bf
bee3aa22582f67ae5321c9528e8019d9c854344985025,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760636714487777458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bclq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e00fd9-01b4-49fc-a966-4b66cf7511b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab379979b815073927b1ee1eb8c5f66519846cef0a604e200dd924314f0281ae,PodSandboxId:d36c6ca1aea23dcebf31d455986aa5167dc7999a23d13d52e11ae86b1c824d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760636713844892987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npsls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb852db-a2c1-43ca-aec1-05e353515731,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7bff05edf6af8a6e182bfefe2862b836118b91da6de4bdc200c523661a360d8,PodSandboxId:84c6c3ef92e9236f5a3218014904da9dea41ad1081a4bf28feb60e6c1e56baf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760636701775657081,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebb29acd6b80239edbb1fc6d9b00683,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c177a055524087465b344b0845bcb679706c51b7a09387f24d20258d91fe4cef,PodSandboxId:6f15e6f7fedd6c6435faf9743361f12da9940cd643447c5cbe005222f36366a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760636701811009705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
c01f9b6f4e047ed243275a9d18f377,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d08c9206e01edaf0f19dcfbc386ed1f07389f3858cbd3743f2b315c82a2cb1ff,PodSandboxId:0b730c3eb5030d8c27b25e0defa2ce2b534f678b4b0f27a4682a4cb9dba44dbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760636701763029977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c40222361a51f2d447ed6184675afe4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89473df087feb8e67c2145c57ee4923255435ad5936e1e4b6fdb998a89b70f83,PodSandboxId:49a045450381e96f6468b5e862f15931927815d478087934f5bee83beb23f763,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176063670
1724921723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23e9f2744b5657e1b81e6dd400d61812,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30b590f1-d947-43fd-a6fd-0ad31aabcaa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.601532761Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69d08cde-bb9d-4ac9-b997-1ee6de5ad368 name=/runtime.v1.RuntimeService/Version
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.601872396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69d08cde-bb9d-4ac9-b997-1ee6de5ad368 name=/runtime.v1.RuntimeService/Version
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.603505256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e79113f-3cc6-4fea-b12e-7a7e383a4870 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.605631253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760637048605598703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e79113f-3cc6-4fea-b12e-7a7e383a4870 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.606387592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07bf196e-55f8-4790-923b-bf6dd83a96a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.606466127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07bf196e-55f8-4790-923b-bf6dd83a96a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.606774441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29321141730a051690abcaf96a324160fe7f2e39c31e7cc16ac3659151578b08,PodSandboxId:d20b036b012034b87696055cef309bf78a32142144b4a319106f8c2d9e786b00,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760636906389732509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d772ea5-6e5c-457a-a18c-fd5017516390,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8432bd68eaec68b008888a3f15042a402594ae5c7db25eb4be6daed245b2c50c,PodSandboxId:6910931def666b54eca41898e28cf4aab2abec81d3bb6ec19fb9bca888c84efd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760636877852501903,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d04de663-3415-49fc-8de0-6c2bcb2781c1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83ae4a84e0766bfbe45990bba2dfdc2adca1921267f8eaabd5938602a2a2ff0,PodSandboxId:02e2d18ecffeb68da4d8800c039bc89fb440b20090e46e743b4143c4deddd515,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760636867417403929,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-rhrxq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1f3aed82-a02e-4f79-b867-a0110167ce6c,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c48396e46409bdab6e9131de9de3ab2381293a63c05dd244acad92ef9fab9fe6,PodSandboxId:580e68211b71239312f77f819dc904a2142d381abe52c3402a2618fff5cc0cb3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636787803143271,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h4c46,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d38d377b-b728-4a5c-bb80-7e82d5f097f7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53a3cd0331029eb0b4e7c4bf44afa86b4919086c3f0df9689c247469a6f812e,PodSandboxId:76440e4561638cb3910fc5d4f772c6acf42eb2500171a405ea39bb493d99fb24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636786306953506,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-648wf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74ca283b-2124-4219-adc2-2019c569c952,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ef48c6334d8dff9f6202451d91f94e74b0cde84ff144710c1e9da7c8e6c74,PodSandboxId:a26fc2fc65d8254704e1b74eb09606a9ba779cd19acd298de8d074a1367a4254,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760636775090915912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-q89mj,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: d5744fd9-0134-43c6-8f05-f31e6045d33a,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b857dfa9e72f862930c918442e5d3c2ebb58d3445ce68222bd83a814858cc09e,PodSandboxId:9393ec65aeaa85f76474e00bb65130d9d4a3a688bd93b20401239945361ee4e6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760636760057098446,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277dd7f0-484c-49a7-9288-696bb6c358fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccec12606b6cbbc60f88f0f1329b04d44a0063cc3698b7e275b8583df81dbafe,PodSandboxId:1546312522bb2447591d9b5d7dd712129c5284d04d12445
f08a158d90788550a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760636750834280552,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9dsld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b8b7737-1a02-462d-a0a0-742829716fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b98ed62270e2be0e51ff399836d5d81848498ba5965a79fee0f41e99a17ecd3,PodSandboxId:fb34634
1d7c388ecf1351fa5eb3adc8067d3a11915e8612a2cd1b6476c105368,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760636720172008526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdb302c-4425-42dd-b594-7e6a54836850,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb9eee15cdd641392160d66491f04a8c739137d0a7a0ace22641906bd6df852,PodSandboxId:e67d58e116f251399bf
bee3aa22582f67ae5321c9528e8019d9c854344985025,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760636714487777458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bclq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e00fd9-01b4-49fc-a966-4b66cf7511b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab379979b815073927b1ee1eb8c5f66519846cef0a604e200dd924314f0281ae,PodSandboxId:d36c6ca1aea23dcebf31d455986aa5167dc7999a23d13d52e11ae86b1c824d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760636713844892987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npsls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb852db-a2c1-43ca-aec1-05e353515731,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7bff05edf6af8a6e182bfefe2862b836118b91da6de4bdc200c523661a360d8,PodSandboxId:84c6c3ef92e9236f5a3218014904da9dea41ad1081a4bf28feb60e6c1e56baf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760636701775657081,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebb29acd6b80239edbb1fc6d9b00683,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c177a055524087465b344b0845bcb679706c51b7a09387f24d20258d91fe4cef,PodSandboxId:6f15e6f7fedd6c6435faf9743361f12da9940cd643447c5cbe005222f36366a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760636701811009705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
c01f9b6f4e047ed243275a9d18f377,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d08c9206e01edaf0f19dcfbc386ed1f07389f3858cbd3743f2b315c82a2cb1ff,PodSandboxId:0b730c3eb5030d8c27b25e0defa2ce2b534f678b4b0f27a4682a4cb9dba44dbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760636701763029977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c40222361a51f2d447ed6184675afe4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89473df087feb8e67c2145c57ee4923255435ad5936e1e4b6fdb998a89b70f83,PodSandboxId:49a045450381e96f6468b5e862f15931927815d478087934f5bee83beb23f763,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176063670
1724921723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23e9f2744b5657e1b81e6dd400d61812,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07bf196e-55f8-4790-923b-bf6dd83a96a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.643002815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c58527f7-4bb3-40f6-bf92-c03bb83f5882 name=/runtime.v1.RuntimeService/Version
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.643129962Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c58527f7-4bb3-40f6-bf92-c03bb83f5882 name=/runtime.v1.RuntimeService/Version
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.644261315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88c9cbb7-c772-4e68-81eb-5cc19d31568d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.645557858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760637048645527801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88c9cbb7-c772-4e68-81eb-5cc19d31568d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.646194090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebeeb959-387b-407a-b637-4d07cbb03953 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.646311782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebeeb959-387b-407a-b637-4d07cbb03953 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.646879721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29321141730a051690abcaf96a324160fe7f2e39c31e7cc16ac3659151578b08,PodSandboxId:d20b036b012034b87696055cef309bf78a32142144b4a319106f8c2d9e786b00,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760636906389732509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d772ea5-6e5c-457a-a18c-fd5017516390,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8432bd68eaec68b008888a3f15042a402594ae5c7db25eb4be6daed245b2c50c,PodSandboxId:6910931def666b54eca41898e28cf4aab2abec81d3bb6ec19fb9bca888c84efd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760636877852501903,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d04de663-3415-49fc-8de0-6c2bcb2781c1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83ae4a84e0766bfbe45990bba2dfdc2adca1921267f8eaabd5938602a2a2ff0,PodSandboxId:02e2d18ecffeb68da4d8800c039bc89fb440b20090e46e743b4143c4deddd515,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760636867417403929,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-rhrxq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1f3aed82-a02e-4f79-b867-a0110167ce6c,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c48396e46409bdab6e9131de9de3ab2381293a63c05dd244acad92ef9fab9fe6,PodSandboxId:580e68211b71239312f77f819dc904a2142d381abe52c3402a2618fff5cc0cb3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636787803143271,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h4c46,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d38d377b-b728-4a5c-bb80-7e82d5f097f7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53a3cd0331029eb0b4e7c4bf44afa86b4919086c3f0df9689c247469a6f812e,PodSandboxId:76440e4561638cb3910fc5d4f772c6acf42eb2500171a405ea39bb493d99fb24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636786306953506,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-648wf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74ca283b-2124-4219-adc2-2019c569c952,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ef48c6334d8dff9f6202451d91f94e74b0cde84ff144710c1e9da7c8e6c74,PodSandboxId:a26fc2fc65d8254704e1b74eb09606a9ba779cd19acd298de8d074a1367a4254,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760636775090915912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-q89mj,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: d5744fd9-0134-43c6-8f05-f31e6045d33a,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b857dfa9e72f862930c918442e5d3c2ebb58d3445ce68222bd83a814858cc09e,PodSandboxId:9393ec65aeaa85f76474e00bb65130d9d4a3a688bd93b20401239945361ee4e6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760636760057098446,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277dd7f0-484c-49a7-9288-696bb6c358fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccec12606b6cbbc60f88f0f1329b04d44a0063cc3698b7e275b8583df81dbafe,PodSandboxId:1546312522bb2447591d9b5d7dd712129c5284d04d12445f08a158d90788550a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760636750834280552,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9dsld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b8b7737-1a02-462d-a0a0-742829716fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b98ed62270e2be0e51ff399836d5d81848498ba5965a79fee0f41e99a17ecd3,PodSandboxId:fb346341d7c388ecf1351fa5eb3adc8067d3a11915e8612a2cd1b6476c105368,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760636720172008526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdb302c-4425-42dd-b594-7e6a54836850,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb9eee15cdd641392160d66491f04a8c739137d0a7a0ace22641906bd6df852,PodSandboxId:e67d58e116f251399bfbee3aa22582f67ae5321c9528e8019d9c854344985025,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760636714487777458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bclq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e00fd9-01b4-49fc-a966-4b66cf7511b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab379979b815073927b1ee1eb8c5f66519846cef0a604e200dd924314f0281ae,PodSandboxId:d36c6ca1aea23dcebf31d455986aa5167dc7999a23d13d52e11ae86b1c824d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760636713844892987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npsls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb852db-a2c1-43ca-aec1-05e353515731,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7bff05edf6af8a6e182bfefe2862b836118b91da6de4bdc200c523661a360d8,PodSandboxId:84c6c3ef92e9236f5a3218014904da9dea41ad1081a4bf28feb60e6c1e56baf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760636701775657081,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebb29acd6b80239edbb1fc6d9b00683,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c177a055524087465b344b0845bcb679706c51b7a09387f24d20258d91fe4cef,PodSandboxId:6f15e6f7fedd6c6435faf9743361f12da9940cd643447c5cbe005222f36366a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760636701811009705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c01f9b6f4e047ed243275a9d18f377,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d08c9206e01edaf0f19dcfbc386ed1f07389f3858cbd3743f2b315c82a2cb1ff,PodSandboxId:0b730c3eb5030d8c27b25e0defa2ce2b534f678b4b0f27a4682a4cb9dba44dbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760636701763029977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c40222361a51f2d447ed6184675afe4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89473df087feb8e67c2145c57ee4923255435ad5936e1e4b6fdb998a89b70f83,PodSandboxId:49a045450381e96f6468b5e862f15931927815d478087934f5bee83beb23f763,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760636701724921723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23e9f2744b5657e1b81e6dd400d61812,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebeeb959-387b-407a-b637-4d07cbb03953 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.676936482Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.677167059Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.683540174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30723c12-ffb7-44be-a578-0a115d928620 name=/runtime.v1.RuntimeService/Version
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.683628433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30723c12-ffb7-44be-a578-0a115d928620 name=/runtime.v1.RuntimeService/Version
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.685301417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1d82d68-d831-4aea-89a7-8f77f2d1b673 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.686578347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760637048686552524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1d82d68-d831-4aea-89a7-8f77f2d1b673 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.687440764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b37ccac7-847a-46e5-b799-8f2019a59a9e name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.687514796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b37ccac7-847a-46e5-b799-8f2019a59a9e name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 17:50:48 addons-019580 crio[819]: time="2025-10-16 17:50:48.688211134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29321141730a051690abcaf96a324160fe7f2e39c31e7cc16ac3659151578b08,PodSandboxId:d20b036b012034b87696055cef309bf78a32142144b4a319106f8c2d9e786b00,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760636906389732509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d772ea5-6e5c-457a-a18c-fd5017516390,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8432bd68eaec68b008888a3f15042a402594ae5c7db25eb4be6daed245b2c50c,PodSandboxId:6910931def666b54eca41898e28cf4aab2abec81d3bb6ec19fb9bca888c84efd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760636877852501903,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d04de663-3415-49fc-8de0-6c2bcb2781c1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83ae4a84e0766bfbe45990bba2dfdc2adca1921267f8eaabd5938602a2a2ff0,PodSandboxId:02e2d18ecffeb68da4d8800c039bc89fb440b20090e46e743b4143c4deddd515,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760636867417403929,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-rhrxq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1f3aed82-a02e-4f79-b867-a0110167ce6c,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c48396e46409bdab6e9131de9de3ab2381293a63c05dd244acad92ef9fab9fe6,PodSandboxId:580e68211b71239312f77f819dc904a2142d381abe52c3402a2618fff5cc0cb3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636787803143271,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h4c46,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d38d377b-b728-4a5c-bb80-7e82d5f097f7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53a3cd0331029eb0b4e7c4bf44afa86b4919086c3f0df9689c247469a6f812e,PodSandboxId:76440e4561638cb3910fc5d4f772c6acf42eb2500171a405ea39bb493d99fb24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760636786306953506,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-648wf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74ca283b-2124-4219-adc2-2019c569c952,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ef48c6334d8dff9f6202451d91f94e74b0cde84ff144710c1e9da7c8e6c74,PodSandboxId:a26fc2fc65d8254704e1b74eb09606a9ba779cd19acd298de8d074a1367a4254,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760636775090915912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-q89mj,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: d5744fd9-0134-43c6-8f05-f31e6045d33a,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b857dfa9e72f862930c918442e5d3c2ebb58d3445ce68222bd83a814858cc09e,PodSandboxId:9393ec65aeaa85f76474e00bb65130d9d4a3a688bd93b20401239945361ee4e6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760636760057098446,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277dd7f0-484c-49a7-9288-696bb6c358fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccec12606b6cbbc60f88f0f1329b04d44a0063cc3698b7e275b8583df81dbafe,PodSandboxId:1546312522bb2447591d9b5d7dd712129c5284d04d12445f08a158d90788550a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760636750834280552,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9dsld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b8b7737-1a02-462d-a0a0-742829716fd8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b98ed62270e2be0e51ff399836d5d81848498ba5965a79fee0f41e99a17ecd3,PodSandboxId:fb346341d7c388ecf1351fa5eb3adc8067d3a11915e8612a2cd1b6476c105368,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760636720172008526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdb302c-4425-42dd-b594-7e6a54836850,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb9eee15cdd641392160d66491f04a8c739137d0a7a0ace22641906bd6df852,PodSandboxId:e67d58e116f251399bfbee3aa22582f67ae5321c9528e8019d9c854344985025,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760636714487777458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bclq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e00fd9-01b4-49fc-a966-4b66cf7511b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab379979b815073927b1ee1eb8c5f66519846cef0a604e200dd924314f0281ae,PodSandboxId:d36c6ca1aea23dcebf31d455986aa5167dc7999a23d13d52e11ae86b1c824d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760636713844892987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npsls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb852db-a2c1-43ca-aec1-05e353515731,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7bff05edf6af8a6e182bfefe2862b836118b91da6de4bdc200c523661a360d8,PodSandboxId:84c6c3ef92e9236f5a3218014904da9dea41ad1081a4bf28feb60e6c1e56baf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760636701775657081,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebb29acd6b80239edbb1fc6d9b00683,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c177a055524087465b344b0845bcb679706c51b7a09387f24d20258d91fe4cef,PodSandboxId:6f15e6f7fedd6c6435faf9743361f12da9940cd643447c5cbe005222f36366a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760636701811009705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c01f9b6f4e047ed243275a9d18f377,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d08c9206e01edaf0f19dcfbc386ed1f07389f3858cbd3743f2b315c82a2cb1ff,PodSandboxId:0b730c3eb5030d8c27b25e0defa2ce2b534f678b4b0f27a4682a4cb9dba44dbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760636701763029977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c40222361a51f2d447ed6184675afe4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89473df087feb8e67c2145c57ee4923255435ad5936e1e4b6fdb998a89b70f83,PodSandboxId:49a045450381e96f6468b5e862f15931927815d478087934f5bee83beb23f763,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760636701724921723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-019580,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23e9f2744b5657e1b81e6dd400d61812,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b37ccac7-847a-46e5-b799-8f2019a59a9e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29321141730a0       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   d20b036b01203       nginx
	8432bd68eaec6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   6910931def666       busybox
	a83ae4a84e076       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   02e2d18ecffeb       ingress-nginx-controller-675c5ddd98-rhrxq
	c48396e46409b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              patch                     0                   580e68211b712       ingress-nginx-admission-patch-h4c46
	c53a3cd033102       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   76440e4561638       ingress-nginx-admission-create-648wf
	641ef48c6334d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   a26fc2fc65d82       gadget-q89mj
	b857dfa9e72f8       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   9393ec65aeaa8       kube-ingress-dns-minikube
	ccec12606b6cb       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   1546312522bb2       amd-gpu-device-plugin-9dsld
	6b98ed62270e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   fb346341d7c38       storage-provisioner
	9eb9eee15cdd6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   e67d58e116f25       coredns-66bc5c9577-bclq8
	ab379979b8150       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   d36c6ca1aea23       kube-proxy-npsls
	c177a05552408       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   6f15e6f7fedd6       kube-scheduler-addons-019580
	b7bff05edf6af       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   84c6c3ef92e92       kube-controller-manager-addons-019580
	d08c9206e01ed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   0b730c3eb5030       kube-apiserver-addons-019580
	89473df087feb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   49a045450381e       etcd-addons-019580
	
	
	==> coredns [9eb9eee15cdd641392160d66491f04a8c739137d0a7a0ace22641906bd6df852] <==
	[INFO] 10.244.0.8:56314 - 53482 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000922432s
	[INFO] 10.244.0.8:56314 - 51258 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000126118s
	[INFO] 10.244.0.8:56314 - 38538 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000105121s
	[INFO] 10.244.0.8:56314 - 43096 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000157965s
	[INFO] 10.244.0.8:56314 - 14103 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000153155s
	[INFO] 10.244.0.8:56314 - 4336 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000095659s
	[INFO] 10.244.0.8:56314 - 1387 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000073901s
	[INFO] 10.244.0.8:37749 - 64461 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000189493s
	[INFO] 10.244.0.8:37749 - 64128 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00035607s
	[INFO] 10.244.0.8:58412 - 52450 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151334s
	[INFO] 10.244.0.8:58412 - 52744 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073186s
	[INFO] 10.244.0.8:52952 - 61094 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000125538s
	[INFO] 10.244.0.8:52952 - 60824 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000230145s
	[INFO] 10.244.0.8:44083 - 4640 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000592137s
	[INFO] 10.244.0.8:44083 - 5109 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000334791s
	[INFO] 10.244.0.23:38521 - 2045 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000499311s
	[INFO] 10.244.0.23:43502 - 56290 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000612917s
	[INFO] 10.244.0.23:52754 - 22515 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111809s
	[INFO] 10.244.0.23:52782 - 5876 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099756s
	[INFO] 10.244.0.23:49334 - 50161 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105156s
	[INFO] 10.244.0.23:39672 - 22231 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000177422s
	[INFO] 10.244.0.23:47344 - 65416 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.003646659s
	[INFO] 10.244.0.23:36489 - 18118 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004770139s
	[INFO] 10.244.0.26:47632 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000476977s
	[INFO] 10.244.0.26:40736 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000114086s
	
	
	==> describe nodes <==
	Name:               addons-019580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-019580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=addons-019580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T17_45_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-019580
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 17:45:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-019580
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 17:50:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 17:49:13 +0000   Thu, 16 Oct 2025 17:45:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 17:49:13 +0000   Thu, 16 Oct 2025 17:45:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 17:49:13 +0000   Thu, 16 Oct 2025 17:45:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 17:49:13 +0000   Thu, 16 Oct 2025 17:45:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    addons-019580
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 718eea9deb614b5e8aced4ee8245b513
	  System UUID:                718eea9d-eb61-4b5e-8ace-d4ee8245b513
	  Boot ID:                    33045919-22ac-4b50-a4f6-05dcb8e59536
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     hello-world-app-5d498dc89-f4952              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-q89mj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-rhrxq    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m26s
	  kube-system                 amd-gpu-device-plugin-9dsld                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 coredns-66bc5c9577-bclq8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m35s
	  kube-system                 etcd-addons-019580                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m41s
	  kube-system                 kube-apiserver-addons-019580                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-controller-manager-addons-019580        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-npsls                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-scheduler-addons-019580                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m33s  kube-proxy       
	  Normal  Starting                 5m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m41s  kubelet          Node addons-019580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s  kubelet          Node addons-019580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s  kubelet          Node addons-019580 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m40s  kubelet          Node addons-019580 status is now: NodeReady
	  Normal  RegisteredNode           5m37s  node-controller  Node addons-019580 event: Registered Node addons-019580 in Controller
	  Normal  CIDRAssignmentFailed     5m37s  cidrAllocator    Node addons-019580 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.219869] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.331198] kauditd_printk_skb: 333 callbacks suppressed
	[  +0.366373] kauditd_printk_skb: 359 callbacks suppressed
	[  +2.794744] kauditd_printk_skb: 308 callbacks suppressed
	[ +17.361518] kauditd_printk_skb: 11 callbacks suppressed
	[Oct16 17:46] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.111854] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.919722] kauditd_printk_skb: 26 callbacks suppressed
	[  +9.770812] kauditd_printk_skb: 35 callbacks suppressed
	[  +2.008601] kauditd_printk_skb: 109 callbacks suppressed
	[  +1.041749] kauditd_printk_skb: 136 callbacks suppressed
	[Oct16 17:47] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.614247] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.000030] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.081362] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:48] kauditd_printk_skb: 22 callbacks suppressed
	[  +1.877303] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.672203] kauditd_printk_skb: 128 callbacks suppressed
	[  +7.994702] kauditd_printk_skb: 62 callbacks suppressed
	[  +2.188895] kauditd_printk_skb: 93 callbacks suppressed
	[  +1.471087] kauditd_printk_skb: 61 callbacks suppressed
	[  +1.848399] kauditd_printk_skb: 159 callbacks suppressed
	[Oct16 17:49] kauditd_printk_skb: 30 callbacks suppressed
	[ +21.805683] kauditd_printk_skb: 107 callbacks suppressed
	[Oct16 17:50] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [89473df087feb8e67c2145c57ee4923255435ad5936e1e4b6fdb998a89b70f83] <==
	{"level":"info","ts":"2025-10-16T17:46:34.126775Z","caller":"traceutil/trace.go:172","msg":"trace[1552132326] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1094; }","duration":"162.885107ms","start":"2025-10-16T17:46:33.963884Z","end":"2025-10-16T17:46:34.126769Z","steps":["trace[1552132326] 'agreement among raft nodes before linearized reading'  (duration: 162.814957ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:46:34.127200Z","caller":"traceutil/trace.go:172","msg":"trace[595509739] transaction","detail":"{read_only:false; response_revision:1094; number_of_response:1; }","duration":"167.972085ms","start":"2025-10-16T17:46:33.958665Z","end":"2025-10-16T17:46:34.126637Z","steps":["trace[595509739] 'process raft request'  (duration: 167.844566ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:46:34.129468Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"159.123558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:46:34.132426Z","caller":"traceutil/trace.go:172","msg":"trace[2032981274] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1094; }","duration":"162.087522ms","start":"2025-10-16T17:46:33.970328Z","end":"2025-10-16T17:46:34.132415Z","steps":["trace[2032981274] 'agreement among raft nodes before linearized reading'  (duration: 159.103334ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:46:34.130872Z","caller":"traceutil/trace.go:172","msg":"trace[772221258] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"124.087146ms","start":"2025-10-16T17:46:34.006776Z","end":"2025-10-16T17:46:34.130863Z","steps":["trace[772221258] 'process raft request'  (duration: 122.704096ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:46:34.131119Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.520337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:46:34.141679Z","caller":"traceutil/trace.go:172","msg":"trace[142361708] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"135.081446ms","start":"2025-10-16T17:46:34.006586Z","end":"2025-10-16T17:46:34.141667Z","steps":["trace[142361708] 'agreement among raft nodes before linearized reading'  (duration: 124.44507ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:46:35.300977Z","caller":"traceutil/trace.go:172","msg":"trace[947256357] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"160.865831ms","start":"2025-10-16T17:46:35.139997Z","end":"2025-10-16T17:46:35.300862Z","steps":["trace[947256357] 'process raft request'  (duration: 160.226951ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:46:37.107371Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.124397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:46:37.107435Z","caller":"traceutil/trace.go:172","msg":"trace[236897724] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1127; }","duration":"134.235863ms","start":"2025-10-16T17:46:36.973188Z","end":"2025-10-16T17:46:37.107424Z","steps":["trace[236897724] 'range keys from in-memory index tree'  (duration: 134.073322ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:46:37.107480Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.563057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:46:37.107514Z","caller":"traceutil/trace.go:172","msg":"trace[475982016] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1127; }","duration":"102.609192ms","start":"2025-10-16T17:46:37.004897Z","end":"2025-10-16T17:46:37.107506Z","steps":["trace[475982016] 'range keys from in-memory index tree'  (duration: 102.465829ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:46:37.107618Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.526732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:46:37.107634Z","caller":"traceutil/trace.go:172","msg":"trace[1503814991] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1127; }","duration":"139.546588ms","start":"2025-10-16T17:46:36.968082Z","end":"2025-10-16T17:46:37.107629Z","steps":["trace[1503814991] 'range keys from in-memory index tree'  (duration: 139.476476ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:47:44.306917Z","caller":"traceutil/trace.go:172","msg":"trace[1875660407] transaction","detail":"{read_only:false; response_revision:1235; number_of_response:1; }","duration":"190.673931ms","start":"2025-10-16T17:47:44.116228Z","end":"2025-10-16T17:47:44.306902Z","steps":["trace[1875660407] 'process raft request'  (duration: 189.271733ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:48:42.827896Z","caller":"traceutil/trace.go:172","msg":"trace[1639500177] linearizableReadLoop","detail":"{readStateIndex:1685; appliedIndex:1685; }","duration":"152.611286ms","start":"2025-10-16T17:48:42.675218Z","end":"2025-10-16T17:48:42.827830Z","steps":["trace[1639500177] 'read index received'  (duration: 152.602287ms)","trace[1639500177] 'applied index is now lower than readState.Index'  (duration: 7.784µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T17:48:42.828828Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.567297ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:48:42.828878Z","caller":"traceutil/trace.go:172","msg":"trace[1847380172] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1616; }","duration":"153.656897ms","start":"2025-10-16T17:48:42.675213Z","end":"2025-10-16T17:48:42.828870Z","steps":["trace[1847380172] 'agreement among raft nodes before linearized reading'  (duration: 152.722868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:48:42.830114Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.074763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" limit:1 ","response":"range_response_count:1 size:1086"}
	{"level":"info","ts":"2025-10-16T17:48:42.831310Z","caller":"traceutil/trace.go:172","msg":"trace[972481471] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1617; }","duration":"140.284266ms","start":"2025-10-16T17:48:42.691016Z","end":"2025-10-16T17:48:42.831301Z","steps":["trace[972481471] 'agreement among raft nodes before linearized reading'  (duration: 138.627471ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:48:42.830917Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.878361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-10-16T17:48:42.833224Z","caller":"traceutil/trace.go:172","msg":"trace[1855218718] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1617; }","duration":"123.187734ms","start":"2025-10-16T17:48:42.710026Z","end":"2025-10-16T17:48:42.833214Z","steps":["trace[1855218718] 'agreement among raft nodes before linearized reading'  (duration: 120.35917ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:48:42.830426Z","caller":"traceutil/trace.go:172","msg":"trace[572026586] transaction","detail":"{read_only:false; response_revision:1617; number_of_response:1; }","duration":"191.978427ms","start":"2025-10-16T17:48:42.638439Z","end":"2025-10-16T17:48:42.830417Z","steps":["trace[572026586] 'process raft request'  (duration: 189.410111ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:48:42.833633Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.23716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-9fdcc576-35c5-4162-a0f5-167380d6b2ab\" limit:1 ","response":"range_response_count:1 size:4175"}
	{"level":"info","ts":"2025-10-16T17:48:42.833671Z","caller":"traceutil/trace.go:172","msg":"trace[2077666728] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-9fdcc576-35c5-4162-a0f5-167380d6b2ab; range_end:; response_count:1; response_revision:1617; }","duration":"126.278466ms","start":"2025-10-16T17:48:42.707386Z","end":"2025-10-16T17:48:42.833664Z","steps":["trace[2077666728] 'agreement among raft nodes before linearized reading'  (duration: 123.015432ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:50:49 up 6 min,  0 users,  load average: 0.62, 1.01, 0.59
	Linux addons-019580 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d08c9206e01edaf0f19dcfbc386ed1f07389f3858cbd3743f2b315c82a2cb1ff] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1016 17:46:03.449767       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.22.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.22.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.22.161:443: connect: connection refused" logger="UnhandledError"
	E1016 17:46:03.454559       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.22.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.22.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.22.161:443: connect: connection refused" logger="UnhandledError"
	I1016 17:46:03.518278       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1016 17:48:05.391021       1 conn.go:339] Error on socket receive: read tcp 192.168.39.210:8443->192.168.39.1:41996: use of closed network connection
	E1016 17:48:05.574331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.210:8443->192.168.39.1:42022: use of closed network connection
	I1016 17:48:21.323613       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1016 17:48:21.548614       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.138.11"}
	I1016 17:48:28.697434       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.197.93"}
	I1016 17:48:42.611912       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1016 17:49:02.404002       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1016 17:49:02.404503       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1016 17:49:02.444467       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1016 17:49:02.444496       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1016 17:49:02.474983       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1016 17:49:02.475064       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1016 17:49:02.488357       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1016 17:49:02.488552       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1016 17:49:03.444694       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1016 17:49:03.488176       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1016 17:49:03.506921       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1016 17:49:04.473298       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1016 17:49:11.017138       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1016 17:50:47.396138       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.150.127"}
	
	
	==> kube-controller-manager [b7bff05edf6af8a6e182bfefe2862b836118b91da6de4bdc200c523661a360d8] <==
	I1016 17:49:12.120083       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 17:49:12.171303       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1016 17:49:12.171387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1016 17:49:13.463351       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:49:13.464468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:49:20.440833       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:49:20.441887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:49:21.352578       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:49:21.353848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:49:22.941658       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:49:22.942848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:49:34.430491       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:49:34.431483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:49:42.240573       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:49:42.241907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:49:43.685399       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:49:43.686713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:50:08.860816       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:50:08.861863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:50:11.482739       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:50:11.483825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:50:26.723296       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:50:26.724455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1016 17:50:47.566129       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1016 17:50:47.567349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [ab379979b815073927b1ee1eb8c5f66519846cef0a604e200dd924314f0281ae] <==
	I1016 17:45:14.522219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 17:45:14.624188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 17:45:14.624276       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E1016 17:45:14.624400       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 17:45:14.996647       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1016 17:45:14.996829       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1016 17:45:14.996921       1 server_linux.go:132] "Using iptables Proxier"
	I1016 17:45:15.017647       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 17:45:15.017974       1 server.go:527] "Version info" version="v1.34.1"
	I1016 17:45:15.017999       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 17:45:15.031139       1 config.go:200] "Starting service config controller"
	I1016 17:45:15.040461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 17:45:15.031342       1 config.go:106] "Starting endpoint slice config controller"
	I1016 17:45:15.043727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 17:45:15.031375       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 17:45:15.043752       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 17:45:15.040115       1 config.go:309] "Starting node config controller"
	I1016 17:45:15.049560       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 17:45:15.049576       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 17:45:15.140667       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 17:45:15.144101       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 17:45:15.144360       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c177a055524087465b344b0845bcb679706c51b7a09387f24d20258d91fe4cef] <==
	E1016 17:45:05.046929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 17:45:05.046927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 17:45:05.048022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 17:45:05.048183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1016 17:45:05.049314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 17:45:05.051826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 17:45:05.051962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 17:45:05.053865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 17:45:05.053958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 17:45:05.053978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 17:45:05.914193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 17:45:05.936806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 17:45:05.937092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 17:45:05.984127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 17:45:05.988395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 17:45:05.996113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 17:45:06.000004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 17:45:06.087003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 17:45:06.210172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 17:45:06.247030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 17:45:06.247244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 17:45:06.298110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 17:45:06.355860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 17:45:06.497532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1016 17:45:08.734076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 17:49:26 addons-019580 kubelet[1517]: I1016 17:49:26.592216    1517 scope.go:117] "RemoveContainer" containerID="5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91"
	Oct 16 17:49:26 addons-019580 kubelet[1517]: E1016 17:49:26.593370    1517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91\": container with ID starting with 5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91 not found: ID does not exist" containerID="5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91"
	Oct 16 17:49:26 addons-019580 kubelet[1517]: I1016 17:49:26.593418    1517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91"} err="failed to get container status \"5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91\": rpc error: code = NotFound desc = could not find container \"5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91\": container with ID starting with 5a6c37470a438949901c4c5f47f6568563881eea6e55e3cbe72882c336008a91 not found: ID does not exist"
	Oct 16 17:49:27 addons-019580 kubelet[1517]: I1016 17:49:27.691391    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbad5fb-9e82-4b65-ade5-89279b45e2f5" path="/var/lib/kubelet/pods/9fbad5fb-9e82-4b65-ade5-89279b45e2f5/volumes"
	Oct 16 17:49:28 addons-019580 kubelet[1517]: E1016 17:49:28.062965    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760636968062461134  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:28 addons-019580 kubelet[1517]: E1016 17:49:28.062993    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760636968062461134  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:38 addons-019580 kubelet[1517]: E1016 17:49:38.067013    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760636978066280521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:38 addons-019580 kubelet[1517]: E1016 17:49:38.067122    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760636978066280521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:48 addons-019580 kubelet[1517]: E1016 17:49:48.071170    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760636988070332525  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:48 addons-019580 kubelet[1517]: E1016 17:49:48.071220    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760636988070332525  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:58 addons-019580 kubelet[1517]: E1016 17:49:58.074315    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760636998073771214  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:58 addons-019580 kubelet[1517]: E1016 17:49:58.074341    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760636998073771214  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:49:58 addons-019580 kubelet[1517]: I1016 17:49:58.687022    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-9dsld" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:50:08 addons-019580 kubelet[1517]: E1016 17:50:08.077535    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760637008076998278  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:08 addons-019580 kubelet[1517]: E1016 17:50:08.077586    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760637008076998278  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:18 addons-019580 kubelet[1517]: E1016 17:50:18.081159    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760637018080684680  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:18 addons-019580 kubelet[1517]: E1016 17:50:18.081212    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760637018080684680  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:18 addons-019580 kubelet[1517]: I1016 17:50:18.686454    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:50:28 addons-019580 kubelet[1517]: E1016 17:50:28.084311    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760637028083831026  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:28 addons-019580 kubelet[1517]: E1016 17:50:28.084344    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760637028083831026  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:38 addons-019580 kubelet[1517]: E1016 17:50:38.087310    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760637038086865164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:38 addons-019580 kubelet[1517]: E1016 17:50:38.087357    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760637038086865164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:47 addons-019580 kubelet[1517]: I1016 17:50:47.337376    1517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f5mg\" (UniqueName: \"kubernetes.io/projected/c8a676c6-f35f-4092-89fa-80bc63335b34-kube-api-access-6f5mg\") pod \"hello-world-app-5d498dc89-f4952\" (UID: \"c8a676c6-f35f-4092-89fa-80bc63335b34\") " pod="default/hello-world-app-5d498dc89-f4952"
	Oct 16 17:50:48 addons-019580 kubelet[1517]: E1016 17:50:48.092736    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760637048091806519  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 16 17:50:48 addons-019580 kubelet[1517]: E1016 17:50:48.092776    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760637048091806519  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [6b98ed62270e2be0e51ff399836d5d81848498ba5965a79fee0f41e99a17ecd3] <==
	W1016 17:50:23.412718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:25.418704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:25.428137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:27.432322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:27.436663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:29.439993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:29.444890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:31.449470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:31.454413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:33.459325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:33.464427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:35.467517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:35.472497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:37.476489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:37.484734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:39.488827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:39.494172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:41.497932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:41.503733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:43.507820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:43.512953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:45.516410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:45.521476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:47.524826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:47.532573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-019580 -n addons-019580
helpers_test.go:269: (dbg) Run:  kubectl --context addons-019580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-f4952 ingress-nginx-admission-create-648wf ingress-nginx-admission-patch-h4c46
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-019580 describe pod hello-world-app-5d498dc89-f4952 ingress-nginx-admission-create-648wf ingress-nginx-admission-patch-h4c46
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-019580 describe pod hello-world-app-5d498dc89-f4952 ingress-nginx-admission-create-648wf ingress-nginx-admission-patch-h4c46: exit status 1 (97.836235ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-f4952
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-019580/192.168.39.210
	Start Time:       Thu, 16 Oct 2025 17:50:47 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f5mg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6f5mg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-f4952 to addons-019580
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-648wf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h4c46" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-019580 describe pod hello-world-app-5d498dc89-f4952 ingress-nginx-admission-create-648wf ingress-nginx-admission-patch-h4c46: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable ingress-dns --alsologtostderr -v=1: (1.042911149s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable ingress --alsologtostderr -v=1: (7.786352557s)
--- FAIL: TestAddons/parallel/Ingress (157.85s)
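
For manual triage of a failure like this one, the state captured in the post-mortem can be re-checked against the live cluster with standard kubectl commands. A minimal sketch, reusing the profile/context name from the logs above; the ingress-nginx controller deployment name is the addon's usual default, not something this report confirms:

	# Check the ingress, backing service, and pod that the test created:
	kubectl --context addons-019580 get ingress,svc,pods -n default -o wide
	# Confirm the ingress controller itself is running and inspect its logs:
	kubectl --context addons-019580 -n ingress-nginx get pods -o wide
	kubectl --context addons-019580 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50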

TestPreload (168.76s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-747936 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1016 18:35:41.932269   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-747936 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m38.844805311s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-747936 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-747936 image pull gcr.io/k8s-minikube/busybox: (3.167194949s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-747936
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-747936: (7.665545786s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-747936 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-747936 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.213118592s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-747936 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
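
The failing sequence is easy to replay by hand, since every step is an ordinary minikube invocation. A sketch assembled from the preload_test.go commands recorded above (profile name reused from the log; --alsologtostderr and --auto-update-drivers=false omitted for brevity):

	# Start a cluster with preloaded images disabled, on an older Kubernetes:
	minikube start -p test-preload-747936 --memory=3072 --wait=true \
	    --preload=false --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.32.0
	# Pull an extra image into the node's container storage, then stop:
	minikube -p test-preload-747936 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-747936
	# Restart (this time a preload tarball is available and gets applied):
	minikube start -p test-preload-747936 --memory=3072 --wait=true \
	    --driver=kvm2 --container-runtime=crio
	# The test asserts the previously pulled image survives the restart:
	minikube -p test-preload-747936 image list | grep k8s-minikube/busybox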
panic.go:636: *** TestPreload FAILED at 2025-10-16 18:37:13.147269739 +0000 UTC m=+3204.711783886
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-747936 -n test-preload-747936
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-747936 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-747936 logs -n 25: (1.050813501s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-225382 ssh -n multinode-225382-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ ssh     │ multinode-225382 ssh -n multinode-225382 sudo cat /home/docker/cp-test_multinode-225382-m03_multinode-225382.txt                                                                    │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ cp      │ multinode-225382 cp multinode-225382-m03:/home/docker/cp-test.txt multinode-225382-m02:/home/docker/cp-test_multinode-225382-m03_multinode-225382-m02.txt                           │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ ssh     │ multinode-225382 ssh -n multinode-225382-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ ssh     │ multinode-225382 ssh -n multinode-225382-m02 sudo cat /home/docker/cp-test_multinode-225382-m03_multinode-225382-m02.txt                                                            │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ node    │ multinode-225382 node stop m03                                                                                                                                                      │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ node    │ multinode-225382 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:24 UTC │
	│ node    │ list -p multinode-225382                                                                                                                                                            │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │                     │
	│ stop    │ -p multinode-225382                                                                                                                                                                 │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p multinode-225382 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:29 UTC │
	│ node    │ list -p multinode-225382                                                                                                                                                            │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ node    │ multinode-225382 node delete m03                                                                                                                                                    │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ stop    │ multinode-225382 stop                                                                                                                                                               │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:32 UTC │
	│ start   │ -p multinode-225382 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:33 UTC │
	│ node    │ list -p multinode-225382                                                                                                                                                            │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:33 UTC │                     │
	│ start   │ -p multinode-225382-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-225382-m02 │ jenkins │ v1.37.0 │ 16 Oct 25 18:33 UTC │                     │
	│ start   │ -p multinode-225382-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-225382-m03 │ jenkins │ v1.37.0 │ 16 Oct 25 18:33 UTC │ 16 Oct 25 18:34 UTC │
	│ node    │ add -p multinode-225382                                                                                                                                                             │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:34 UTC │                     │
	│ delete  │ -p multinode-225382-m03                                                                                                                                                             │ multinode-225382-m03 │ jenkins │ v1.37.0 │ 16 Oct 25 18:34 UTC │ 16 Oct 25 18:34 UTC │
	│ delete  │ -p multinode-225382                                                                                                                                                                 │ multinode-225382     │ jenkins │ v1.37.0 │ 16 Oct 25 18:34 UTC │ 16 Oct 25 18:34 UTC │
	│ start   │ -p test-preload-747936 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-747936  │ jenkins │ v1.37.0 │ 16 Oct 25 18:34 UTC │ 16 Oct 25 18:36 UTC │
	│ image   │ test-preload-747936 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-747936  │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │ 16 Oct 25 18:36 UTC │
	│ stop    │ -p test-preload-747936                                                                                                                                                              │ test-preload-747936  │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │ 16 Oct 25 18:36 UTC │
	│ start   │ -p test-preload-747936 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-747936  │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │ 16 Oct 25 18:37 UTC │
	│ image   │ test-preload-747936 image list                                                                                                                                                      │ test-preload-747936  │ jenkins │ v1.37.0 │ 16 Oct 25 18:37 UTC │ 16 Oct 25 18:37 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:36:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:36:16.770552   43185 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:36:16.770788   43185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:16.770796   43185 out.go:374] Setting ErrFile to fd 2...
	I1016 18:36:16.770800   43185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:16.770978   43185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:36:16.771426   43185 out.go:368] Setting JSON to false
	I1016 18:36:16.772249   43185 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4715,"bootTime":1760635062,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:36:16.772333   43185 start.go:141] virtualization: kvm guest
	I1016 18:36:16.774089   43185 out.go:179] * [test-preload-747936] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:36:16.775429   43185 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:36:16.775424   43185 notify.go:220] Checking for updates...
	I1016 18:36:16.777462   43185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:36:16.778529   43185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 18:36:16.779681   43185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 18:36:16.780710   43185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:36:16.781755   43185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:36:16.783173   43185 config.go:182] Loaded profile config "test-preload-747936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1016 18:36:16.783546   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:16.783615   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:16.796796   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I1016 18:36:16.797310   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:16.797905   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:16.797934   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:16.798302   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:16.798487   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:16.800138   43185 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1016 18:36:16.801227   43185 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:36:16.801528   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:16.801602   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:16.814875   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I1016 18:36:16.815352   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:16.815810   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:16.815836   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:16.816190   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:16.816403   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:16.849629   43185 out.go:179] * Using the kvm2 driver based on existing profile
	I1016 18:36:16.850766   43185 start.go:305] selected driver: kvm2
	I1016 18:36:16.850799   43185 start.go:925] validating driver "kvm2" against &{Name:test-preload-747936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-747936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:36:16.850927   43185 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:36:16.852089   43185 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:36:16.852204   43185 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 18:36:16.865465   43185 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 18:36:16.865488   43185 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 18:36:16.878678   43185 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 18:36:16.879023   43185 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:36:16.879048   43185 cni.go:84] Creating CNI manager for ""
	I1016 18:36:16.879090   43185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 18:36:16.879154   43185 start.go:349] cluster config:
	{Name:test-preload-747936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-747936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:36:16.879266   43185 iso.go:125] acquiring lock: {Name:mke23fa091b5b2529e94c2fba7379020f81892c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:36:16.881716   43185 out.go:179] * Starting "test-preload-747936" primary control-plane node in "test-preload-747936" cluster
	I1016 18:36:16.882691   43185 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1016 18:36:17.269570   43185 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1016 18:36:17.269600   43185 cache.go:58] Caching tarball of preloaded images
	I1016 18:36:17.269771   43185 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1016 18:36:17.271595   43185 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1016 18:36:17.272832   43185 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1016 18:36:17.375073   43185 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1016 18:36:17.375136   43185 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1016 18:36:27.866698   43185 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
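
The two download lines above fetch the preload with an md5 digest obtained from the GCS API and then verify it. A minimal Go sketch of that verification step, assuming the file name and digest from the log; this is an illustration, not minikube's actual download code:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams a file through an md5 hash and compares the hex
// digest with the expected value (e.g. the one returned by the GCS API
// in the log above).
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// File name and digest taken from the log; adjust the path for your cache dir.
	err := verifyMD5("preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
		"2acdb4dde52794f2167c79dcee7507ae")
	fmt.Println(err)
}
```
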
	I1016 18:36:27.866868   43185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/config.json ...
	I1016 18:36:27.867152   43185 start.go:360] acquireMachinesLock for test-preload-747936: {Name:mkfc8a48414152b8c16845fb35ed65ca3f42bae5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1016 18:36:27.867224   43185 start.go:364] duration metric: took 46.981µs to acquireMachinesLock for "test-preload-747936"
	I1016 18:36:27.867246   43185 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:36:27.867252   43185 fix.go:54] fixHost starting: 
	I1016 18:36:27.867555   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:27.867592   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:27.881016   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I1016 18:36:27.881537   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:27.882064   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:27.882082   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:27.882438   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:27.882607   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:27.882958   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetState
	I1016 18:36:27.885249   43185 fix.go:112] recreateIfNeeded on test-preload-747936: state=Stopped err=<nil>
	I1016 18:36:27.885299   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	W1016 18:36:27.885434   43185 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:36:27.887570   43185 out.go:252] * Restarting existing kvm2 VM for "test-preload-747936" ...
	I1016 18:36:27.887600   43185 main.go:141] libmachine: (test-preload-747936) Calling .Start
	I1016 18:36:27.887771   43185 main.go:141] libmachine: (test-preload-747936) starting domain...
	I1016 18:36:27.887792   43185 main.go:141] libmachine: (test-preload-747936) ensuring networks are active...
	I1016 18:36:27.888658   43185 main.go:141] libmachine: (test-preload-747936) Ensuring network default is active
	I1016 18:36:27.889086   43185 main.go:141] libmachine: (test-preload-747936) Ensuring network mk-test-preload-747936 is active
	I1016 18:36:27.889613   43185 main.go:141] libmachine: (test-preload-747936) getting domain XML...
	I1016 18:36:27.890752   43185 main.go:141] libmachine: (test-preload-747936) DBG | starting domain XML:
	I1016 18:36:27.890774   43185 main.go:141] libmachine: (test-preload-747936) DBG | <domain type='kvm'>
	I1016 18:36:27.890806   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <name>test-preload-747936</name>
	I1016 18:36:27.890831   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <uuid>7b48e431-21de-4f8c-b815-9b5fcbc42beb</uuid>
	I1016 18:36:27.890841   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <memory unit='KiB'>3145728</memory>
	I1016 18:36:27.890854   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1016 18:36:27.890890   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <vcpu placement='static'>2</vcpu>
	I1016 18:36:27.890918   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <os>
	I1016 18:36:27.890936   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1016 18:36:27.890952   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <boot dev='cdrom'/>
	I1016 18:36:27.890975   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <boot dev='hd'/>
	I1016 18:36:27.890993   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <bootmenu enable='no'/>
	I1016 18:36:27.891005   43185 main.go:141] libmachine: (test-preload-747936) DBG |   </os>
	I1016 18:36:27.891013   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <features>
	I1016 18:36:27.891025   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <acpi/>
	I1016 18:36:27.891036   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <apic/>
	I1016 18:36:27.891043   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <pae/>
	I1016 18:36:27.891049   43185 main.go:141] libmachine: (test-preload-747936) DBG |   </features>
	I1016 18:36:27.891061   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1016 18:36:27.891075   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <clock offset='utc'/>
	I1016 18:36:27.891087   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <on_poweroff>destroy</on_poweroff>
	I1016 18:36:27.891096   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <on_reboot>restart</on_reboot>
	I1016 18:36:27.891109   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <on_crash>destroy</on_crash>
	I1016 18:36:27.891130   43185 main.go:141] libmachine: (test-preload-747936) DBG |   <devices>
	I1016 18:36:27.891146   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1016 18:36:27.891161   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <disk type='file' device='cdrom'>
	I1016 18:36:27.891173   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <driver name='qemu' type='raw'/>
	I1016 18:36:27.891188   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <source file='/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/boot2docker.iso'/>
	I1016 18:36:27.891200   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <target dev='hdc' bus='scsi'/>
	I1016 18:36:27.891208   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <readonly/>
	I1016 18:36:27.891218   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1016 18:36:27.891228   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </disk>
	I1016 18:36:27.891237   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <disk type='file' device='disk'>
	I1016 18:36:27.891248   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1016 18:36:27.891264   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <source file='/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/test-preload-747936.rawdisk'/>
	I1016 18:36:27.891285   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <target dev='hda' bus='virtio'/>
	I1016 18:36:27.891297   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1016 18:36:27.891306   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </disk>
	I1016 18:36:27.891319   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1016 18:36:27.891342   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1016 18:36:27.891357   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </controller>
	I1016 18:36:27.891376   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1016 18:36:27.891395   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1016 18:36:27.891412   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1016 18:36:27.891426   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </controller>
	I1016 18:36:27.891440   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <interface type='network'>
	I1016 18:36:27.891453   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <mac address='52:54:00:bf:4a:ed'/>
	I1016 18:36:27.891470   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <source network='mk-test-preload-747936'/>
	I1016 18:36:27.891484   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <model type='virtio'/>
	I1016 18:36:27.891495   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1016 18:36:27.891504   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </interface>
	I1016 18:36:27.891513   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <interface type='network'>
	I1016 18:36:27.891524   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <mac address='52:54:00:99:ad:45'/>
	I1016 18:36:27.891544   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <source network='default'/>
	I1016 18:36:27.891559   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <model type='virtio'/>
	I1016 18:36:27.891573   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1016 18:36:27.891587   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </interface>
	I1016 18:36:27.891609   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <serial type='pty'>
	I1016 18:36:27.891627   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <target type='isa-serial' port='0'>
	I1016 18:36:27.891641   43185 main.go:141] libmachine: (test-preload-747936) DBG |         <model name='isa-serial'/>
	I1016 18:36:27.891651   43185 main.go:141] libmachine: (test-preload-747936) DBG |       </target>
	I1016 18:36:27.891660   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </serial>
	I1016 18:36:27.891671   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <console type='pty'>
	I1016 18:36:27.891680   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <target type='serial' port='0'/>
	I1016 18:36:27.891690   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </console>
	I1016 18:36:27.891706   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <input type='mouse' bus='ps2'/>
	I1016 18:36:27.891720   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <input type='keyboard' bus='ps2'/>
	I1016 18:36:27.891737   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <audio id='1' type='none'/>
	I1016 18:36:27.891748   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <memballoon model='virtio'>
	I1016 18:36:27.891761   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1016 18:36:27.891772   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </memballoon>
	I1016 18:36:27.891782   43185 main.go:141] libmachine: (test-preload-747936) DBG |     <rng model='virtio'>
	I1016 18:36:27.891799   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <backend model='random'>/dev/random</backend>
	I1016 18:36:27.891826   43185 main.go:141] libmachine: (test-preload-747936) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1016 18:36:27.891841   43185 main.go:141] libmachine: (test-preload-747936) DBG |     </rng>
	I1016 18:36:27.891853   43185 main.go:141] libmachine: (test-preload-747936) DBG |   </devices>
	I1016 18:36:27.891860   43185 main.go:141] libmachine: (test-preload-747936) DBG | </domain>
	I1016 18:36:27.891885   43185 main.go:141] libmachine: (test-preload-747936) DBG | 
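
libmachine dumps the domain XML before starting it, then matches the first interface's MAC against the network's DHCP leases to find the IP. A hedged sketch of extracting those MACs from a definition like the one above with encoding/xml; the struct shape is an assumption for illustration, not libmachine's type:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// domain models just the <devices><interface> elements we care about.
type domain struct {
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	// Trimmed-down copy of the interfaces from the XML dump above.
	const def = `<domain type='kvm'><devices>
  <interface type='network'><mac address='52:54:00:bf:4a:ed'/><source network='mk-test-preload-747936'/></interface>
  <interface type='network'><mac address='52:54:00:99:ad:45'/><source network='default'/></interface>
</devices></domain>`

	var d domain
	if err := xml.Unmarshal([]byte(def), &d); err != nil {
		panic(err)
	}
	for _, in := range d.Interfaces {
		fmt.Printf("%s on %s\n", in.MAC.Address, in.Source.Network)
	}
}
```
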
	I1016 18:36:29.138185   43185 main.go:141] libmachine: (test-preload-747936) waiting for domain to start...
	I1016 18:36:29.139506   43185 main.go:141] libmachine: (test-preload-747936) domain is now running
	I1016 18:36:29.139527   43185 main.go:141] libmachine: (test-preload-747936) waiting for IP...
	I1016 18:36:29.140375   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:29.140965   43185 main.go:141] libmachine: (test-preload-747936) found domain IP: 192.168.39.234
	I1016 18:36:29.140993   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has current primary IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:29.141002   43185 main.go:141] libmachine: (test-preload-747936) reserving static IP address...
	I1016 18:36:29.141512   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "test-preload-747936", mac: "52:54:00:bf:4a:ed", ip: "192.168.39.234"} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:34:42 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:29.141546   43185 main.go:141] libmachine: (test-preload-747936) DBG | skip adding static IP to network mk-test-preload-747936 - found existing host DHCP lease matching {name: "test-preload-747936", mac: "52:54:00:bf:4a:ed", ip: "192.168.39.234"}
	I1016 18:36:29.141570   43185 main.go:141] libmachine: (test-preload-747936) reserved static IP address 192.168.39.234 for domain test-preload-747936
	I1016 18:36:29.141589   43185 main.go:141] libmachine: (test-preload-747936) waiting for SSH...
	I1016 18:36:29.141612   43185 main.go:141] libmachine: (test-preload-747936) DBG | Getting to WaitForSSH function...
	I1016 18:36:29.144006   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:29.144411   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:34:42 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:29.144441   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:29.144610   43185 main.go:141] libmachine: (test-preload-747936) DBG | Using SSH client type: external
	I1016 18:36:29.144664   43185 main.go:141] libmachine: (test-preload-747936) DBG | Using SSH private key: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa (-rw-------)
	I1016 18:36:29.144694   43185 main.go:141] libmachine: (test-preload-747936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1016 18:36:29.144705   43185 main.go:141] libmachine: (test-preload-747936) DBG | About to run SSH command:
	I1016 18:36:29.144718   43185 main.go:141] libmachine: (test-preload-747936) DBG | exit 0
	I1016 18:36:39.376631   43185 main.go:141] libmachine: (test-preload-747936) DBG | SSH cmd err, output: exit status 255: 
	I1016 18:36:39.376670   43185 main.go:141] libmachine: (test-preload-747936) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1016 18:36:39.376683   43185 main.go:141] libmachine: (test-preload-747936) DBG | command : exit 0
	I1016 18:36:39.376695   43185 main.go:141] libmachine: (test-preload-747936) DBG | err     : exit status 255
	I1016 18:36:39.376706   43185 main.go:141] libmachine: (test-preload-747936) DBG | output  : 
	I1016 18:36:42.378748   43185 main.go:141] libmachine: (test-preload-747936) DBG | Getting to WaitForSSH function...
	I1016 18:36:42.381786   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.382246   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:42.382284   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.382407   43185 main.go:141] libmachine: (test-preload-747936) DBG | Using SSH client type: external
	I1016 18:36:42.382433   43185 main.go:141] libmachine: (test-preload-747936) DBG | Using SSH private key: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa (-rw-------)
	I1016 18:36:42.382471   43185 main.go:141] libmachine: (test-preload-747936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1016 18:36:42.382487   43185 main.go:141] libmachine: (test-preload-747936) DBG | About to run SSH command:
	I1016 18:36:42.382518   43185 main.go:141] libmachine: (test-preload-747936) DBG | exit 0
	I1016 18:36:42.511231   43185 main.go:141] libmachine: (test-preload-747936) DBG | SSH cmd err, output: <nil>: 
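
The WaitForSSH exchange above retries `ssh ... "exit 0"` until the guest answers: the first attempt failed with status 255 while the VM was still booting, the retry succeeded. A simplified Go sketch of that polling loop; the host, key path, and retry cadence are illustrative values taken from the log, not minikube's exact logic:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH polls `ssh ... exit 0` until the guest is reachable,
// mirroring the WaitForSSH behaviour shown in the log.
func waitForSSH(host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest answered
		}
		time.Sleep(3 * time.Second) // the log shows a ~3s gap between tries
	}
	return fmt.Errorf("ssh to %s not ready after %d attempts", host, attempts)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.234",
		"/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa", 10))
}
```
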
	I1016 18:36:42.511672   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetConfigRaw
	I1016 18:36:42.512279   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetIP
	I1016 18:36:42.515219   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.515606   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:42.515632   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.515855   43185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/config.json ...
	I1016 18:36:42.516079   43185 machine.go:93] provisionDockerMachine start ...
	I1016 18:36:42.516099   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:42.516328   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:42.519220   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.519688   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:42.519718   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.519919   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:42.520087   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:42.520286   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:42.520478   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:42.520668   43185 main.go:141] libmachine: Using SSH client type: native
	I1016 18:36:42.520884   43185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I1016 18:36:42.520894   43185 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:36:42.626831   43185 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1016 18:36:42.626858   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetMachineName
	I1016 18:36:42.627104   43185 buildroot.go:166] provisioning hostname "test-preload-747936"
	I1016 18:36:42.627145   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetMachineName
	I1016 18:36:42.627368   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:42.630431   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.630932   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:42.630958   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.631095   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:42.631277   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:42.631443   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:42.631594   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:42.631768   43185 main.go:141] libmachine: Using SSH client type: native
	I1016 18:36:42.632028   43185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I1016 18:36:42.632043   43185 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-747936 && echo "test-preload-747936" | sudo tee /etc/hostname
	I1016 18:36:42.751610   43185 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-747936
	
	I1016 18:36:42.751636   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:42.754695   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.755088   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:42.755141   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.755289   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:42.755506   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:42.755671   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:42.755812   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:42.755997   43185 main.go:141] libmachine: Using SSH client type: native
	I1016 18:36:42.756259   43185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I1016 18:36:42.756279   43185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-747936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-747936/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-747936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:36:42.876145   43185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:36:42.876177   43185 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8816/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8816/.minikube}
	I1016 18:36:42.876221   43185 buildroot.go:174] setting up certificates
	I1016 18:36:42.876241   43185 provision.go:84] configureAuth start
	I1016 18:36:42.876261   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetMachineName
	I1016 18:36:42.876554   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetIP
	I1016 18:36:42.879644   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.880036   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:42.880061   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.880244   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:42.882560   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.882904   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:42.882935   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:42.883088   43185 provision.go:143] copyHostCerts
	I1016 18:36:42.883158   43185 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8816/.minikube/ca.pem, removing ...
	I1016 18:36:42.883172   43185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.pem
	I1016 18:36:42.883250   43185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/ca.pem (1078 bytes)
	I1016 18:36:42.883372   43185 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8816/.minikube/cert.pem, removing ...
	I1016 18:36:42.883384   43185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8816/.minikube/cert.pem
	I1016 18:36:42.883422   43185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/cert.pem (1123 bytes)
	I1016 18:36:42.883497   43185 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8816/.minikube/key.pem, removing ...
	I1016 18:36:42.883509   43185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8816/.minikube/key.pem
	I1016 18:36:42.883552   43185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/key.pem (1675 bytes)
	I1016 18:36:42.883644   43185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem org=jenkins.test-preload-747936 san=[127.0.0.1 192.168.39.234 localhost minikube test-preload-747936]
	I1016 18:36:42.997530   43185 provision.go:177] copyRemoteCerts
	I1016 18:36:42.997603   43185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:36:42.997627   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:43.000706   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.001023   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:43.001059   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.001293   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:43.001456   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.001599   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:43.001772   43185 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa Username:docker}
	I1016 18:36:43.085016   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:36:43.113942   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1016 18:36:43.141339   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:36:43.168793   43185 provision.go:87] duration metric: took 292.537282ms to configureAuth
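
configureAuth regenerates a server certificate whose SANs are listed in the provision.go line above. A self-signed crypto/x509 sketch with the same SANs; minikube actually signs against its CA rather than self-signing, so this is illustrative only:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-747936"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// SANs as listed in the provision.go log line above.
		DNSNames:    []string{"localhost", "minikube", "test-preload-747936"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.234")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```
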
	I1016 18:36:43.168818   43185 buildroot.go:189] setting minikube options for container-runtime
	I1016 18:36:43.169046   43185 config.go:182] Loaded profile config "test-preload-747936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1016 18:36:43.169146   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:43.172143   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.172552   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:43.172580   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.172760   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:43.172952   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.173113   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.173278   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:43.173421   43185 main.go:141] libmachine: Using SSH client type: native
	I1016 18:36:43.173608   43185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I1016 18:36:43.173622   43185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:36:43.412206   43185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:36:43.412241   43185 machine.go:96] duration metric: took 896.148057ms to provisionDockerMachine
	I1016 18:36:43.412253   43185 start.go:293] postStartSetup for "test-preload-747936" (driver="kvm2")
	I1016 18:36:43.412263   43185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:36:43.412283   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:43.412595   43185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:36:43.412640   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:43.415923   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.416324   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:43.416351   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.416569   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:43.416782   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.416926   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:43.417046   43185 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa Username:docker}
	I1016 18:36:43.499461   43185 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:36:43.503951   43185 info.go:137] Remote host: Buildroot 2025.02
	I1016 18:36:43.503971   43185 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8816/.minikube/addons for local assets ...
	I1016 18:36:43.504029   43185 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8816/.minikube/files for local assets ...
	I1016 18:36:43.504096   43185 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem -> 127672.pem in /etc/ssl/certs
	I1016 18:36:43.504193   43185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:36:43.514894   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem --> /etc/ssl/certs/127672.pem (1708 bytes)
	I1016 18:36:43.543049   43185 start.go:296] duration metric: took 130.781899ms for postStartSetup
	I1016 18:36:43.543088   43185 fix.go:56] duration metric: took 15.675836023s for fixHost
	I1016 18:36:43.543106   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:43.546171   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.546559   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:43.546589   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.546757   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:43.546976   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.547154   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.547374   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:43.547557   43185 main.go:141] libmachine: Using SSH client type: native
	I1016 18:36:43.547834   43185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I1016 18:36:43.547850   43185 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1016 18:36:43.653392   43185 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760639803.618393153
	
	I1016 18:36:43.653416   43185 fix.go:216] guest clock: 1760639803.618393153
	I1016 18:36:43.653422   43185 fix.go:229] Guest: 2025-10-16 18:36:43.618393153 +0000 UTC Remote: 2025-10-16 18:36:43.543091589 +0000 UTC m=+26.809796172 (delta=75.301564ms)
	I1016 18:36:43.653441   43185 fix.go:200] guest clock delta is within tolerance: 75.301564ms
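
fix.go parses the guest's `date +%s.%N` output and accepts the skew when the delta from the host clock is within tolerance. A small Go sketch reproducing the delta computation from the two timestamps in the log; the 1s tolerance here is an assumed value, not minikube's constant:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output and returns guest minus host,
// matching the sign convention of the delta in the log line above.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Remote (host) timestamp taken from the fix.go log line above.
	host := time.Date(2025, 10, 16, 18, 36, 43, 543091589, time.UTC)
	d, err := clockDelta("1760639803.618393153", host)
	if err != nil {
		panic(err)
	}
	// Prints delta=75.301564ms within tolerance: true
	fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(d.Seconds()) < 1.0)
}
```
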
	I1016 18:36:43.653445   43185 start.go:83] releasing machines lock for "test-preload-747936", held for 15.786207822s
	I1016 18:36:43.653465   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:43.653717   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetIP
	I1016 18:36:43.656994   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.657434   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:43.657461   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.657615   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:43.658075   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:43.658245   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:43.658313   43185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:36:43.658364   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:43.658443   43185 ssh_runner.go:195] Run: cat /version.json
	I1016 18:36:43.658469   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:43.661368   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.661654   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.661811   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:43.661849   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.662019   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:43.662092   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:43.662111   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:43.662214   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.662280   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:43.662375   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:43.662439   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:43.662526   43185 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa Username:docker}
	I1016 18:36:43.662564   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:43.662681   43185 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa Username:docker}
	I1016 18:36:43.740814   43185 ssh_runner.go:195] Run: systemctl --version
	I1016 18:36:43.776493   43185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:36:43.921819   43185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:36:43.928503   43185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:36:43.928560   43185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:36:43.947287   43185 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1016 18:36:43.947310   43185 start.go:495] detecting cgroup driver to use...
	I1016 18:36:43.947373   43185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:36:43.966603   43185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:36:43.982624   43185 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:36:43.982689   43185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:36:43.999073   43185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:36:44.015037   43185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:36:44.156331   43185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:36:44.374594   43185 docker.go:234] disabling docker service ...
	I1016 18:36:44.374656   43185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:36:44.392027   43185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:36:44.406764   43185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:36:44.568995   43185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:36:44.709778   43185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:36:44.725559   43185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:36:44.750388   43185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1016 18:36:44.750448   43185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:36:44.763885   43185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:36:44.763953   43185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:36:44.776440   43185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:36:44.788434   43185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:36:44.800206   43185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:36:44.812869   43185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:36:44.825866   43185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:36:44.846083   43185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
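
The sequence of sed one-liners above rewrites keys in /etc/crio/crio.conf.d/02-crio.conf in place. As a sketch, the pause_image rewrite could equally be done with a regexp over the file contents; minikube shells out to sed as logged, so this is only an illustration of the same edit:

```go
package main

import (
	"fmt"
	"regexp"
)

// setPauseImage rewrites the pause_image key in a crio drop-in the way
// the sed one-liner above does, operating on a string rather than the
// file itself.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf(`pause_image = %q`, image))
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10"))
}
```
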
	I1016 18:36:44.858033   43185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:36:44.868150   43185 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1016 18:36:44.868211   43185 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1016 18:36:44.887316   43185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:36:44.898968   43185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:36:45.037566   43185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:36:45.162017   43185 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:36:45.162087   43185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:36:45.167464   43185 start.go:563] Will wait 60s for crictl version
	I1016 18:36:45.167522   43185 ssh_runner.go:195] Run: which crictl
	I1016 18:36:45.171526   43185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1016 18:36:45.213215   43185 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1016 18:36:45.213298   43185 ssh_runner.go:195] Run: crio --version
	I1016 18:36:45.242440   43185 ssh_runner.go:195] Run: crio --version
	I1016 18:36:45.272284   43185 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1016 18:36:45.273374   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetIP
	I1016 18:36:45.276576   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:45.277055   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:45.277084   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:45.277303   43185 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1016 18:36:45.281856   43185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:36:45.296182   43185 kubeadm.go:883] updating cluster {Name:test-preload-747936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-747936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:36:45.296295   43185 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1016 18:36:45.296341   43185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:36:45.334752   43185 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1016 18:36:45.334819   43185 ssh_runner.go:195] Run: which lz4
	I1016 18:36:45.339317   43185 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1016 18:36:45.344315   43185 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1016 18:36:45.344348   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1016 18:36:46.792211   43185 crio.go:462] duration metric: took 1.452925261s to copy over tarball
	I1016 18:36:46.792275   43185 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1016 18:36:48.461447   43185 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.669145976s)
	I1016 18:36:48.461479   43185 crio.go:469] duration metric: took 1.669242363s to extract the tarball
	I1016 18:36:48.461488   43185 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1016 18:36:48.500892   43185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:36:48.543884   43185 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:36:48.543907   43185 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:36:48.543914   43185 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.32.0 crio true true} ...
	I1016 18:36:48.544018   43185 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-747936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-747936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
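
The kubelet unit printed above is rendered from a template with the binary path, node name and node IP substituted in. A rough text/template sketch of that substitution; the field names here are illustrative, not minikube's actual template (which lives in its kubeadm package):

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    func main() {
    	// Renders just the ExecStart line from the unit above.
    	tmpl := template.Must(template.New("execstart").Parse(
    		"ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}\n"))
    	// Values taken from this run's log.
    	data := struct{ BinDir, NodeName, NodeIP string }{
    		BinDir:   "/var/lib/minikube/binaries/v1.32.0",
    		NodeName: "test-preload-747936",
    		NodeIP:   "192.168.39.234",
    	}
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		log.Fatal(err)
    	}
    }
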
	I1016 18:36:48.544098   43185 ssh_runner.go:195] Run: crio config
	I1016 18:36:48.590523   43185 cni.go:84] Creating CNI manager for ""
	I1016 18:36:48.590551   43185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 18:36:48.590592   43185 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:36:48.590623   43185 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-747936 NodeName:test-preload-747936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:36:48.590774   43185 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-747936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:36:48.590846   43185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1016 18:36:48.602858   43185 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:36:48.602942   43185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:36:48.613997   43185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1016 18:36:48.633025   43185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:36:48.653368   43185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1016 18:36:48.673049   43185 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I1016 18:36:48.677102   43185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
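
The /etc/hosts rewrites above (here and for host.minikube.internal earlier) are an upsert: strip any stale line ending in the tab-separated hostname, append the fresh IP-to-name mapping, stage to a temp file, then sudo cp it into place. The same logic in Go, staging to a temp file only, since the copy-back needs root:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal" // entry rewritten in the log
    	const ip = "192.168.39.234"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Equivalent of grep -v $'\thost$': drop any stale entry for this name.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		keep = append(keep, line)
    	}
    	keep = append(keep, ip+"\t"+host)
    	// Stage the result; applying it is the "sudo cp /tmp/h.$$ /etc/hosts" step from the log.
    	tmp, err := os.CreateTemp("", "hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer tmp.Close()
    	fmt.Fprintln(tmp, strings.Join(keep, "\n"))
    	log.Printf("staged %s entry; run: sudo cp %s /etc/hosts", host, tmp.Name())
    }
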
	I1016 18:36:48.690747   43185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:36:48.835383   43185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:36:48.854890   43185 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936 for IP: 192.168.39.234
	I1016 18:36:48.854914   43185 certs.go:195] generating shared ca certs ...
	I1016 18:36:48.854930   43185 certs.go:227] acquiring lock for ca certs: {Name:mkad193a0fb33fec0ea18d9a56f494b9b8779adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:36:48.855095   43185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key
	I1016 18:36:48.855179   43185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key
	I1016 18:36:48.855194   43185 certs.go:257] generating profile certs ...
	I1016 18:36:48.855302   43185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/client.key
	I1016 18:36:48.855376   43185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/apiserver.key.6061fa62
	I1016 18:36:48.855436   43185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/proxy-client.key
	I1016 18:36:48.855580   43185 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767.pem (1338 bytes)
	W1016 18:36:48.855623   43185 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767_empty.pem, impossibly tiny 0 bytes
	I1016 18:36:48.855635   43185 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:36:48.855681   43185 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:36:48.855712   43185 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:36:48.855738   43185 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem (1675 bytes)
	I1016 18:36:48.855794   43185 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem (1708 bytes)
	I1016 18:36:48.856516   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:36:48.896656   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:36:48.928530   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:36:48.962818   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:36:48.990040   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1016 18:36:49.021068   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:36:49.048771   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:36:49.076368   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 18:36:49.104384   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:36:49.131882   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767.pem --> /usr/share/ca-certificates/12767.pem (1338 bytes)
	I1016 18:36:49.159158   43185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem --> /usr/share/ca-certificates/127672.pem (1708 bytes)
	I1016 18:36:49.186591   43185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:36:49.206376   43185 ssh_runner.go:195] Run: openssl version
	I1016 18:36:49.212517   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:36:49.224774   43185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:36:49.229550   43185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:36:49.229606   43185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:36:49.236457   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:36:49.248980   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12767.pem && ln -fs /usr/share/ca-certificates/12767.pem /etc/ssl/certs/12767.pem"
	I1016 18:36:49.261052   43185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12767.pem
	I1016 18:36:49.265801   43185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:53 /usr/share/ca-certificates/12767.pem
	I1016 18:36:49.265862   43185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12767.pem
	I1016 18:36:49.272574   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12767.pem /etc/ssl/certs/51391683.0"
	I1016 18:36:49.285040   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/127672.pem && ln -fs /usr/share/ca-certificates/127672.pem /etc/ssl/certs/127672.pem"
	I1016 18:36:49.297204   43185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127672.pem
	I1016 18:36:49.301893   43185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:53 /usr/share/ca-certificates/127672.pem
	I1016 18:36:49.301937   43185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127672.pem
	I1016 18:36:49.308607   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/127672.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:36:49.320776   43185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:36:49.325517   43185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:36:49.332776   43185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:36:49.339537   43185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:36:49.346442   43185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:36:49.353464   43185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:36:49.360463   43185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
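
Each of the `openssl x509 -checkend 86400` runs above exits non-zero if that certificate expires within the next 24 hours; minikube checks every control-plane cert this way before deciding it can reuse them. The equivalent check with crypto/x509 (path is one of the certs from the log; any PEM cert works):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of -checkend 86400: is the cert still valid 24h from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		log.Fatalf("cert expires within 24h (NotAfter=%s)", cert.NotAfter)
    	}
    	log.Println("cert valid for at least another 24h")
    }
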
	I1016 18:36:49.367502   43185 kubeadm.go:400] StartCluster: {Name:test-preload-747936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-747936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:36:49.367575   43185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:36:49.367655   43185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:36:49.406277   43185 cri.go:89] found id: ""
	I1016 18:36:49.406346   43185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:36:49.418517   43185 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:36:49.418539   43185 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:36:49.418602   43185 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:36:49.429544   43185 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:36:49.429991   43185 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-747936" does not appear in /home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 18:36:49.430097   43185 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8816/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-747936" cluster setting kubeconfig missing "test-preload-747936" context setting]
	I1016 18:36:49.430367   43185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/kubeconfig: {Name:mk4f128d20bbd14d57d7fe32f778269e6fd1a04c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:36:49.430825   43185 kapi.go:59] client config for test-preload-747936: &rest.Config{Host:"https://192.168.39.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:36:49.431241   43185 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:36:49.431255   43185 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:36:49.431259   43185 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:36:49.431263   43185 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:36:49.431266   43185 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:36:49.431574   43185 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:36:49.442228   43185 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.234
	I1016 18:36:49.442260   43185 kubeadm.go:1160] stopping kube-system containers ...
	I1016 18:36:49.442273   43185 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1016 18:36:49.442323   43185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:36:49.479652   43185 cri.go:89] found id: ""
	I1016 18:36:49.479733   43185 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1016 18:36:49.497889   43185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:36:49.509202   43185 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:36:49.509225   43185 kubeadm.go:157] found existing configuration files:
	
	I1016 18:36:49.509276   43185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:36:49.519467   43185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:36:49.519520   43185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:36:49.530336   43185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:36:49.540717   43185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:36:49.540783   43185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:36:49.551941   43185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:36:49.562076   43185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:36:49.562157   43185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:36:49.572589   43185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:36:49.582525   43185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:36:49.582586   43185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:36:49.593166   43185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:36:49.604077   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:36:49.656493   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:36:50.760661   43185 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.104130971s)
	I1016 18:36:50.760739   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:36:51.010290   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:36:51.075344   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:36:51.180028   43185 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:36:51.180134   43185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:36:51.680786   43185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:36:52.181153   43185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:36:52.680957   43185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:36:53.181001   43185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:36:53.680859   43185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:36:53.706544   43185 api_server.go:72] duration metric: took 2.526527205s to wait for apiserver process to appear ...
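
The process wait above is a simple poll: rerun `pgrep -xnf kube-apiserver.*minikube.*` every 500ms (visible in the timestamps) until it exits zero. A minimal sketch of that loop; the two-minute deadline is assumed for the sketch, and minikube runs the command over SSH rather than locally:

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
    	for time.Now().Before(deadline) {
    		// Exit status 0 means a matching kube-apiserver process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			log.Println("apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for apiserver process")
    }
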
	I1016 18:36:53.706575   43185 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:36:53.706593   43185 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1016 18:36:56.024836   43185 api_server.go:279] https://192.168.39.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:36:56.024877   43185 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:36:56.024896   43185 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1016 18:36:56.090133   43185 api_server.go:279] https://192.168.39.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:36:56.090171   43185 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:36:56.207529   43185 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1016 18:36:56.240226   43185 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:36:56.240255   43185 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:36:56.706872   43185 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1016 18:36:56.714552   43185 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:36:56.714584   43185 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:36:57.207500   43185 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1016 18:36:57.215471   43185 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I1016 18:36:57.225480   43185 api_server.go:141] control plane version: v1.32.0
	I1016 18:36:57.225513   43185 api_server.go:131] duration metric: took 3.518931618s to wait for apiserver health ...
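
The healthz probe above tolerates the early 403s (anonymous access denied) and 500s (post-start hooks like rbac/bootstrap-roles and bootstrap-controller still settling) and stops at the first 200 "ok". A stripped-down poller; it skips TLS verification for brevity, whereas the real client config shown earlier pins the minikube CA:

    package main

    import (
    	"crypto/tls"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: trust the cluster CA instead of skipping verification in real use.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 60; i++ {
    		resp, err := client.Get("https://192.168.39.234:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				log.Printf("healthz: %s", body) // "ok"
    				return
    			}
    			// 403/500 while post-start hooks settle; keep retrying.
    			log.Printf("healthz returned %d, retrying", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver never became healthy")
    }
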
	I1016 18:36:57.225522   43185 cni.go:84] Creating CNI manager for ""
	I1016 18:36:57.225530   43185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 18:36:57.227157   43185 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1016 18:36:57.228510   43185 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1016 18:36:57.245553   43185 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
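
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log, but a bridge conflist has a standard CNI shape. A representative (not byte-identical) config built from Go values, with the pod CIDR taken from this run:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    func main() {
    	// Representative bridge conflist; the actual 1-k8s.conflist content is minikube's own.
    	conf := map[string]any{
    		"cniVersion": "1.0.0",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":        "bridge",
    				"bridge":      "bridge",
    				"isGateway":   true,
    				"ipMasq":      true,
    				"hairpinMode": true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16", // pod CIDR from the kubeadm options above
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out)) // this is what would land in /etc/cni/net.d/1-k8s.conflist
    }
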
	I1016 18:36:57.280496   43185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:36:57.284908   43185 system_pods.go:59] 7 kube-system pods found
	I1016 18:36:57.284937   43185 system_pods.go:61] "coredns-668d6bf9bc-zfnfj" [3c92d0b3-f8f4-45cf-9353-43f4b1f26dde] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:36:57.284945   43185 system_pods.go:61] "etcd-test-preload-747936" [ca5546ef-307a-4764-8bea-cc0bcf09d6ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:36:57.284953   43185 system_pods.go:61] "kube-apiserver-test-preload-747936" [672b45f9-531d-417c-b937-a97da90b3442] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:36:57.284959   43185 system_pods.go:61] "kube-controller-manager-test-preload-747936" [2cd22f00-0505-47d2-85cb-84783d34d174] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:36:57.284965   43185 system_pods.go:61] "kube-proxy-5nj2g" [057f5ae8-4c19-4d1c-a68e-25a3d4ad7355] Running
	I1016 18:36:57.284970   43185 system_pods.go:61] "kube-scheduler-test-preload-747936" [6909e849-4241-41a2-bb9b-7e4050e89e31] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:36:57.284973   43185 system_pods.go:61] "storage-provisioner" [d6185384-2b63-49e9-8c84-8d0318e3f4ad] Running
	I1016 18:36:57.284979   43185 system_pods.go:74] duration metric: took 4.455818ms to wait for pod list to return data ...
	I1016 18:36:57.284988   43185 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:36:57.292753   43185 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1016 18:36:57.292777   43185 node_conditions.go:123] node cpu capacity is 2
	I1016 18:36:57.292788   43185 node_conditions.go:105] duration metric: took 7.795057ms to run NodePressure ...
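
The pod listing and NodePressure checks above go through a client built from the kubeconfig (the rest.Config dump earlier shows the cert paths it uses). A condensed client-go equivalent that reports the same data, with the kubeconfig path from this run (any valid kubeconfig works):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21738-8816/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Same data the log reports: kube-system pods, then per-node capacity.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
    	}
    }
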
	I1016 18:36:57.292835   43185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:36:57.549402   43185 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1016 18:36:57.553008   43185 kubeadm.go:743] kubelet initialised
	I1016 18:36:57.553028   43185 kubeadm.go:744] duration metric: took 3.604423ms waiting for restarted kubelet to initialise ...
	I1016 18:36:57.553043   43185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:36:57.567798   43185 ops.go:34] apiserver oom_adj: -16
	I1016 18:36:57.567817   43185 kubeadm.go:601] duration metric: took 8.149273531s to restartPrimaryControlPlane
	I1016 18:36:57.567826   43185 kubeadm.go:402] duration metric: took 8.200332304s to StartCluster
	I1016 18:36:57.567841   43185 settings.go:142] acquiring lock: {Name:mk8956f02e21b33221420cc620d69233a6a526cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:36:57.567910   43185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 18:36:57.568453   43185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8816/kubeconfig: {Name:mk4f128d20bbd14d57d7fe32f778269e6fd1a04c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:36:57.568710   43185 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:36:57.568893   43185 config.go:182] Loaded profile config "test-preload-747936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1016 18:36:57.568838   43185 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:36:57.568943   43185 addons.go:69] Setting storage-provisioner=true in profile "test-preload-747936"
	I1016 18:36:57.568957   43185 addons.go:69] Setting default-storageclass=true in profile "test-preload-747936"
	I1016 18:36:57.568967   43185 addons.go:238] Setting addon storage-provisioner=true in "test-preload-747936"
	I1016 18:36:57.568971   43185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-747936"
	W1016 18:36:57.568978   43185 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:36:57.569011   43185 host.go:66] Checking if "test-preload-747936" exists ...
	I1016 18:36:57.569422   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:57.569422   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:57.569470   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:57.569498   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:57.570264   43185 out.go:179] * Verifying Kubernetes components...
	I1016 18:36:57.571550   43185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:36:57.583205   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I1016 18:36:57.583215   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I1016 18:36:57.583667   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:57.583720   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:57.584190   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:57.584210   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:57.584224   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:57.584279   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:57.584569   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:57.584592   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:57.584752   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetState
	I1016 18:36:57.585169   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:57.585198   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:57.587367   43185 kapi.go:59] client config for test-preload-747936: &rest.Config{Host:"https://192.168.39.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:36:57.587745   43185 addons.go:238] Setting addon default-storageclass=true in "test-preload-747936"
	W1016 18:36:57.587766   43185 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:36:57.587794   43185 host.go:66] Checking if "test-preload-747936" exists ...
	I1016 18:36:57.588167   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:57.588199   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:57.599030   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
	I1016 18:36:57.599568   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:57.600001   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:57.600018   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:57.600375   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:57.600458   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I1016 18:36:57.600556   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetState
	I1016 18:36:57.600831   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:57.601387   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:57.601428   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:57.601802   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:57.602426   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:57.602473   43185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:36:57.602524   43185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:36:57.604610   43185 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:36:57.605786   43185 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:36:57.605806   43185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:36:57.605824   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:57.609393   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:57.609988   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:57.610014   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:57.610225   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:57.610420   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:57.610587   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:57.610808   43185 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa Username:docker}
	I1016 18:36:57.616398   43185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I1016 18:36:57.616929   43185 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:36:57.617465   43185 main.go:141] libmachine: Using API Version  1
	I1016 18:36:57.617492   43185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:36:57.617877   43185 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:36:57.618055   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetState
	I1016 18:36:57.620050   43185 main.go:141] libmachine: (test-preload-747936) Calling .DriverName
	I1016 18:36:57.620280   43185 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:36:57.620299   43185 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:36:57.620317   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHHostname
	I1016 18:36:57.623637   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:57.624069   43185 main.go:141] libmachine: (test-preload-747936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:4a:ed", ip: ""} in network mk-test-preload-747936: {Iface:virbr1 ExpiryTime:2025-10-16 19:36:39 +0000 UTC Type:0 Mac:52:54:00:bf:4a:ed Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-747936 Clientid:01:52:54:00:bf:4a:ed}
	I1016 18:36:57.624102   43185 main.go:141] libmachine: (test-preload-747936) DBG | domain test-preload-747936 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:4a:ed in network mk-test-preload-747936
	I1016 18:36:57.624287   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHPort
	I1016 18:36:57.624452   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHKeyPath
	I1016 18:36:57.624589   43185 main.go:141] libmachine: (test-preload-747936) Calling .GetSSHUsername
	I1016 18:36:57.624750   43185 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/test-preload-747936/id_rsa Username:docker}
	I1016 18:36:57.778515   43185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:36:57.796155   43185 node_ready.go:35] waiting up to 6m0s for node "test-preload-747936" to be "Ready" ...
	I1016 18:36:57.799951   43185 node_ready.go:49] node "test-preload-747936" is "Ready"
	I1016 18:36:57.799993   43185 node_ready.go:38] duration metric: took 3.776472ms for node "test-preload-747936" to be "Ready" ...
	I1016 18:36:57.800011   43185 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:36:57.800079   43185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:36:57.821522   43185 api_server.go:72] duration metric: took 252.781781ms to wait for apiserver process to appear ...
	I1016 18:36:57.821547   43185 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:36:57.821563   43185 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1016 18:36:57.825703   43185 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I1016 18:36:57.826531   43185 api_server.go:141] control plane version: v1.32.0
	I1016 18:36:57.826549   43185 api_server.go:131] duration metric: took 4.996183ms to wait for apiserver health ...
	I1016 18:36:57.826556   43185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:36:57.830391   43185 system_pods.go:59] 7 kube-system pods found
	I1016 18:36:57.830416   43185 system_pods.go:61] "coredns-668d6bf9bc-zfnfj" [3c92d0b3-f8f4-45cf-9353-43f4b1f26dde] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:36:57.830423   43185 system_pods.go:61] "etcd-test-preload-747936" [ca5546ef-307a-4764-8bea-cc0bcf09d6ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:36:57.830433   43185 system_pods.go:61] "kube-apiserver-test-preload-747936" [672b45f9-531d-417c-b937-a97da90b3442] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:36:57.830439   43185 system_pods.go:61] "kube-controller-manager-test-preload-747936" [2cd22f00-0505-47d2-85cb-84783d34d174] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:36:57.830446   43185 system_pods.go:61] "kube-proxy-5nj2g" [057f5ae8-4c19-4d1c-a68e-25a3d4ad7355] Running
	I1016 18:36:57.830452   43185 system_pods.go:61] "kube-scheduler-test-preload-747936" [6909e849-4241-41a2-bb9b-7e4050e89e31] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:36:57.830459   43185 system_pods.go:61] "storage-provisioner" [d6185384-2b63-49e9-8c84-8d0318e3f4ad] Running
	I1016 18:36:57.830465   43185 system_pods.go:74] duration metric: took 3.904061ms to wait for pod list to return data ...
	I1016 18:36:57.830475   43185 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:36:57.833418   43185 default_sa.go:45] found service account: "default"
	I1016 18:36:57.833436   43185 default_sa.go:55] duration metric: took 2.956497ms for default service account to be created ...
	I1016 18:36:57.833443   43185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:36:57.835906   43185 system_pods.go:86] 7 kube-system pods found
	I1016 18:36:57.835928   43185 system_pods.go:89] "coredns-668d6bf9bc-zfnfj" [3c92d0b3-f8f4-45cf-9353-43f4b1f26dde] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:36:57.835934   43185 system_pods.go:89] "etcd-test-preload-747936" [ca5546ef-307a-4764-8bea-cc0bcf09d6ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:36:57.835941   43185 system_pods.go:89] "kube-apiserver-test-preload-747936" [672b45f9-531d-417c-b937-a97da90b3442] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:36:57.835947   43185 system_pods.go:89] "kube-controller-manager-test-preload-747936" [2cd22f00-0505-47d2-85cb-84783d34d174] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:36:57.835951   43185 system_pods.go:89] "kube-proxy-5nj2g" [057f5ae8-4c19-4d1c-a68e-25a3d4ad7355] Running
	I1016 18:36:57.835957   43185 system_pods.go:89] "kube-scheduler-test-preload-747936" [6909e849-4241-41a2-bb9b-7e4050e89e31] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:36:57.835961   43185 system_pods.go:89] "storage-provisioner" [d6185384-2b63-49e9-8c84-8d0318e3f4ad] Running
	I1016 18:36:57.835966   43185 system_pods.go:126] duration metric: took 2.519641ms to wait for k8s-apps to be running ...
	I1016 18:36:57.835974   43185 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:36:57.836012   43185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:36:57.851822   43185 system_svc.go:56] duration metric: took 15.837859ms WaitForService to wait for kubelet
	I1016 18:36:57.851847   43185 kubeadm.go:586] duration metric: took 283.111608ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:36:57.851869   43185 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:36:57.854797   43185 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1016 18:36:57.854817   43185 node_conditions.go:123] node cpu capacity is 2
	I1016 18:36:57.854826   43185 node_conditions.go:105] duration metric: took 2.953507ms to run NodePressure ...
	I1016 18:36:57.854836   43185 start.go:241] waiting for startup goroutines ...
	I1016 18:36:57.926207   43185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:36:57.935252   43185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:36:58.647904   43185 main.go:141] libmachine: Making call to close driver server
	I1016 18:36:58.647928   43185 main.go:141] libmachine: (test-preload-747936) Calling .Close
	I1016 18:36:58.647929   43185 main.go:141] libmachine: Making call to close driver server
	I1016 18:36:58.647950   43185 main.go:141] libmachine: (test-preload-747936) Calling .Close
	I1016 18:36:58.648258   43185 main.go:141] libmachine: Successfully made call to close driver server
	I1016 18:36:58.648290   43185 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 18:36:58.648302   43185 main.go:141] libmachine: Making call to close driver server
	I1016 18:36:58.648310   43185 main.go:141] libmachine: (test-preload-747936) Calling .Close
	I1016 18:36:58.648396   43185 main.go:141] libmachine: (test-preload-747936) DBG | Closing plugin on server side
	I1016 18:36:58.648424   43185 main.go:141] libmachine: Successfully made call to close driver server
	I1016 18:36:58.648436   43185 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 18:36:58.648444   43185 main.go:141] libmachine: Making call to close driver server
	I1016 18:36:58.648451   43185 main.go:141] libmachine: (test-preload-747936) Calling .Close
	I1016 18:36:58.648511   43185 main.go:141] libmachine: (test-preload-747936) DBG | Closing plugin on server side
	I1016 18:36:58.648517   43185 main.go:141] libmachine: Successfully made call to close driver server
	I1016 18:36:58.648529   43185 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 18:36:58.648655   43185 main.go:141] libmachine: Successfully made call to close driver server
	I1016 18:36:58.648668   43185 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 18:36:58.648675   43185 main.go:141] libmachine: (test-preload-747936) DBG | Closing plugin on server side
	I1016 18:36:58.654795   43185 main.go:141] libmachine: Making call to close driver server
	I1016 18:36:58.654814   43185 main.go:141] libmachine: (test-preload-747936) Calling .Close
	I1016 18:36:58.655039   43185 main.go:141] libmachine: Successfully made call to close driver server
	I1016 18:36:58.655051   43185 main.go:141] libmachine: Making call to close connection to plugin binary
	I1016 18:36:58.657679   43185 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:36:58.658747   43185 addons.go:514] duration metric: took 1.089924726s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:36:58.658781   43185 start.go:246] waiting for cluster config update ...
	I1016 18:36:58.658796   43185 start.go:255] writing updated cluster config ...
	I1016 18:36:58.659108   43185 ssh_runner.go:195] Run: rm -f paused
	I1016 18:36:58.665363   43185 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:36:58.665821   43185 kapi.go:59] client config for test-preload-747936: &rest.Config{Host:"https://192.168.39.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/profiles/test-preload-747936/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:36:58.669091   43185 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-zfnfj" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 18:37:00.675764   43185 pod_ready.go:104] pod "coredns-668d6bf9bc-zfnfj" is not "Ready", error: <nil>
	W1016 18:37:03.175026   43185 pod_ready.go:104] pod "coredns-668d6bf9bc-zfnfj" is not "Ready", error: <nil>
	W1016 18:37:05.175385   43185 pod_ready.go:104] pod "coredns-668d6bf9bc-zfnfj" is not "Ready", error: <nil>
	I1016 18:37:06.175909   43185 pod_ready.go:94] pod "coredns-668d6bf9bc-zfnfj" is "Ready"
	I1016 18:37:06.175948   43185 pod_ready.go:86] duration metric: took 7.506832203s for pod "coredns-668d6bf9bc-zfnfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:06.179086   43185 pod_ready.go:83] waiting for pod "etcd-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 18:37:08.184879   43185 pod_ready.go:104] pod "etcd-test-preload-747936" is not "Ready", error: <nil>
	I1016 18:37:09.684501   43185 pod_ready.go:94] pod "etcd-test-preload-747936" is "Ready"
	I1016 18:37:09.684525   43185 pod_ready.go:86] duration metric: took 3.505411048s for pod "etcd-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:09.687201   43185 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:10.693797   43185 pod_ready.go:94] pod "kube-apiserver-test-preload-747936" is "Ready"
	I1016 18:37:10.693851   43185 pod_ready.go:86] duration metric: took 1.00662583s for pod "kube-apiserver-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:10.697283   43185 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:12.703479   43185 pod_ready.go:94] pod "kube-controller-manager-test-preload-747936" is "Ready"
	I1016 18:37:12.703517   43185 pod_ready.go:86] duration metric: took 2.006197489s for pod "kube-controller-manager-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:12.705761   43185 pod_ready.go:83] waiting for pod "kube-proxy-5nj2g" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:12.710385   43185 pod_ready.go:94] pod "kube-proxy-5nj2g" is "Ready"
	I1016 18:37:12.710406   43185 pod_ready.go:86] duration metric: took 4.611902ms for pod "kube-proxy-5nj2g" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:12.712563   43185 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:12.883502   43185 pod_ready.go:94] pod "kube-scheduler-test-preload-747936" is "Ready"
	I1016 18:37:12.883538   43185 pod_ready.go:86] duration metric: took 170.95313ms for pod "kube-scheduler-test-preload-747936" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:37:12.883556   43185 pod_ready.go:40] duration metric: took 14.218162477s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:37:12.925003   43185 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1016 18:37:12.926656   43185 out.go:203] 
	W1016 18:37:12.927743   43185 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1016 18:37:12.928787   43185 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1016 18:37:12.930029   43185 out.go:179] * Done! kubectl is now configured to use "test-preload-747936" cluster and "default" namespace by default
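
The pod_ready.go entries above show minikube's readiness wait in action: after the cluster config is written, it polls each labeled kube-system pod on a fixed interval until the pod reports Ready or the 4m0s budget runs out. A minimal sketch of that polling pattern in Go with client-go follows; the kubeconfig path is the one this log itself uses, but the waitPodReady helper and the 2-second interval are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the named pod until its Ready condition is True
    // or the timeout elapses (illustrative; not minikube's exact code).
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log above shows roughly 2s between checks
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "kube-system", "coredns-668d6bf9bc-zfnfj", 4*time.Minute); err != nil {
            panic(err)
        }
    }

Run against the profile above, this would return as soon as coredns-668d6bf9bc-zfnfj posts Ready=True, matching the 7.5s wait recorded in the log.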
	
	
	==> CRI-O <==
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.798239356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639833798214527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=704d9185-bc9a-4a5f-94c8-10ec7e331c03 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.798844439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=119f55d0-a891-4efa-ba77-77fd952f2ec2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.798897188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=119f55d0-a891-4efa-ba77-77fd952f2ec2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.799075195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28966965f7f9dda78981409ab534a5f7f8cf237982b58e14897dc9ae211bbf60,PodSandboxId:4847f89e17e97bb2da3709897a2aeb51e11f8a80f2af4634cc4b6c3559e3fb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760639820167286799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zfnfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c92d0b3-f8f4-45cf-9353-43f4b1f26dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdb58ebc92b8f32603a4906f0fc7ae9166ef168d6749667741b4153dfab6be4,PodSandboxId:c46e6c1e48503bcc304517227b23639c1a356923eb830bf6f9de962f6bce396e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760639816599865433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 057f5ae8-4c19-4d1c-a68e-25a3d4ad7355,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502728fd15a4f7e34c4c7810ad019d662955328cb012921916742a60526a1f86,PodSandboxId:d7cfc705e3a8b680af114e10488760773c768d2db24246d0e5eec60bdf25fe4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760639816597551861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
185384-2b63-49e9-8c84-8d0318e3f4ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de98591e4b695aaced10cf37545d238efb64fbe7e4eff7bd69952a1e84fc9b4,PodSandboxId:4a2c13b846c3f0c2536a8af190ab36703244fc5de2316a9cf1d8471692918f7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760639813341658042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 352fcc5ba4001c2d76ddca5099edb08d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23c25ca8532a05b2b0b15bb0000f2beb06f4477abe7c946c2eafdbfb99c7dc05,PodSandboxId:bdff71c3d4858bb831f2c36a0179e4dbf358a346cdc686d29eec47ccd39ce06f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760639813353375248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d715558de3333fcf28797b1
aba6563a8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778ac25095494395c04a6763c26835f601f22b8a4a59e392cea250e617cac46,PodSandboxId:c004de06c4e1b811c77f62d218086e7e2ce6be6781f80b5c75b6aafe5b0f3028,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760639813329612717,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aef50597c38c02aacfaf9563074a991,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e3db05604c547b70d1a4c88957ec719ece9c007b703c1d1c7f3e55895aa3a7,PodSandboxId:c3fc6478fcb5710a0a296071d81fc52f8886fbbd7c8a4034c54387e56b28f59d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760639813320007890,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1edfb87d8d49ce139668b211f1a72f8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=119f55d0-a891-4efa-ba77-77fd952f2ec2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.836992529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=927a7050-1db2-42f3-a08f-7452a605a6c9 name=/runtime.v1.RuntimeService/Version
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.837180246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=927a7050-1db2-42f3-a08f-7452a605a6c9 name=/runtime.v1.RuntimeService/Version
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.838177613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13bec97b-364d-4f5c-907d-e6f7d444e3e5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.838643851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639833838623714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13bec97b-364d-4f5c-907d-e6f7d444e3e5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.839301459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5912ae14-84e7-4028-b61b-ed5a78d1c083 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.839664847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5912ae14-84e7-4028-b61b-ed5a78d1c083 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.840005200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28966965f7f9dda78981409ab534a5f7f8cf237982b58e14897dc9ae211bbf60,PodSandboxId:4847f89e17e97bb2da3709897a2aeb51e11f8a80f2af4634cc4b6c3559e3fb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760639820167286799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zfnfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c92d0b3-f8f4-45cf-9353-43f4b1f26dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdb58ebc92b8f32603a4906f0fc7ae9166ef168d6749667741b4153dfab6be4,PodSandboxId:c46e6c1e48503bcc304517227b23639c1a356923eb830bf6f9de962f6bce396e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760639816599865433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 057f5ae8-4c19-4d1c-a68e-25a3d4ad7355,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502728fd15a4f7e34c4c7810ad019d662955328cb012921916742a60526a1f86,PodSandboxId:d7cfc705e3a8b680af114e10488760773c768d2db24246d0e5eec60bdf25fe4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760639816597551861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
185384-2b63-49e9-8c84-8d0318e3f4ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de98591e4b695aaced10cf37545d238efb64fbe7e4eff7bd69952a1e84fc9b4,PodSandboxId:4a2c13b846c3f0c2536a8af190ab36703244fc5de2316a9cf1d8471692918f7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760639813341658042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 352fcc5ba4001c2d76ddca5099edb08d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23c25ca8532a05b2b0b15bb0000f2beb06f4477abe7c946c2eafdbfb99c7dc05,PodSandboxId:bdff71c3d4858bb831f2c36a0179e4dbf358a346cdc686d29eec47ccd39ce06f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760639813353375248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d715558de3333fcf28797b1
aba6563a8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778ac25095494395c04a6763c26835f601f22b8a4a59e392cea250e617cac46,PodSandboxId:c004de06c4e1b811c77f62d218086e7e2ce6be6781f80b5c75b6aafe5b0f3028,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760639813329612717,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aef50597c38c02aacfaf9563074a991,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e3db05604c547b70d1a4c88957ec719ece9c007b703c1d1c7f3e55895aa3a7,PodSandboxId:c3fc6478fcb5710a0a296071d81fc52f8886fbbd7c8a4034c54387e56b28f59d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760639813320007890,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1edfb87d8d49ce139668b211f1a72f8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5912ae14-84e7-4028-b61b-ed5a78d1c083 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.878862252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c955450c-1960-4aca-a6cb-ce7549e99e8a name=/runtime.v1.RuntimeService/Version
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.878935147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c955450c-1960-4aca-a6cb-ce7549e99e8a name=/runtime.v1.RuntimeService/Version
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.880177152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bea122ee-0682-47ad-a1c9-b89c26ceebdc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.880753472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639833880730436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bea122ee-0682-47ad-a1c9-b89c26ceebdc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.881291213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dd837b6-bb3b-459c-ac4e-05d1ad3c97a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.881391368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dd837b6-bb3b-459c-ac4e-05d1ad3c97a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.881559615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28966965f7f9dda78981409ab534a5f7f8cf237982b58e14897dc9ae211bbf60,PodSandboxId:4847f89e17e97bb2da3709897a2aeb51e11f8a80f2af4634cc4b6c3559e3fb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760639820167286799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zfnfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c92d0b3-f8f4-45cf-9353-43f4b1f26dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdb58ebc92b8f32603a4906f0fc7ae9166ef168d6749667741b4153dfab6be4,PodSandboxId:c46e6c1e48503bcc304517227b23639c1a356923eb830bf6f9de962f6bce396e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760639816599865433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 057f5ae8-4c19-4d1c-a68e-25a3d4ad7355,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502728fd15a4f7e34c4c7810ad019d662955328cb012921916742a60526a1f86,PodSandboxId:d7cfc705e3a8b680af114e10488760773c768d2db24246d0e5eec60bdf25fe4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760639816597551861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
185384-2b63-49e9-8c84-8d0318e3f4ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de98591e4b695aaced10cf37545d238efb64fbe7e4eff7bd69952a1e84fc9b4,PodSandboxId:4a2c13b846c3f0c2536a8af190ab36703244fc5de2316a9cf1d8471692918f7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760639813341658042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 352fcc5ba4001c2d76ddca5099edb08d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23c25ca8532a05b2b0b15bb0000f2beb06f4477abe7c946c2eafdbfb99c7dc05,PodSandboxId:bdff71c3d4858bb831f2c36a0179e4dbf358a346cdc686d29eec47ccd39ce06f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760639813353375248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d715558de3333fcf28797b1
aba6563a8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778ac25095494395c04a6763c26835f601f22b8a4a59e392cea250e617cac46,PodSandboxId:c004de06c4e1b811c77f62d218086e7e2ce6be6781f80b5c75b6aafe5b0f3028,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760639813329612717,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aef50597c38c02aacfaf9563074a991,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e3db05604c547b70d1a4c88957ec719ece9c007b703c1d1c7f3e55895aa3a7,PodSandboxId:c3fc6478fcb5710a0a296071d81fc52f8886fbbd7c8a4034c54387e56b28f59d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760639813320007890,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1edfb87d8d49ce139668b211f1a72f8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dd837b6-bb3b-459c-ac4e-05d1ad3c97a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.915152858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb80018c-744f-4a71-b260-63ee03746995 name=/runtime.v1.RuntimeService/Version
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.915226421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb80018c-744f-4a71-b260-63ee03746995 name=/runtime.v1.RuntimeService/Version
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.916486863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a74322b-347b-4692-91e2-f7ed23080269 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.916906669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639833916887521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a74322b-347b-4692-91e2-f7ed23080269 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.917556609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=729f463c-8dd2-4469-9c61-082471755ce3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.917625572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=729f463c-8dd2-4469-9c61-082471755ce3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 16 18:37:13 test-preload-747936 crio[833]: time="2025-10-16 18:37:13.917776942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28966965f7f9dda78981409ab534a5f7f8cf237982b58e14897dc9ae211bbf60,PodSandboxId:4847f89e17e97bb2da3709897a2aeb51e11f8a80f2af4634cc4b6c3559e3fb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760639820167286799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zfnfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c92d0b3-f8f4-45cf-9353-43f4b1f26dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdb58ebc92b8f32603a4906f0fc7ae9166ef168d6749667741b4153dfab6be4,PodSandboxId:c46e6c1e48503bcc304517227b23639c1a356923eb830bf6f9de962f6bce396e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760639816599865433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5nj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 057f5ae8-4c19-4d1c-a68e-25a3d4ad7355,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502728fd15a4f7e34c4c7810ad019d662955328cb012921916742a60526a1f86,PodSandboxId:d7cfc705e3a8b680af114e10488760773c768d2db24246d0e5eec60bdf25fe4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760639816597551861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
185384-2b63-49e9-8c84-8d0318e3f4ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de98591e4b695aaced10cf37545d238efb64fbe7e4eff7bd69952a1e84fc9b4,PodSandboxId:4a2c13b846c3f0c2536a8af190ab36703244fc5de2316a9cf1d8471692918f7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760639813341658042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 352fcc5ba4001c2d76ddca5099edb08d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23c25ca8532a05b2b0b15bb0000f2beb06f4477abe7c946c2eafdbfb99c7dc05,PodSandboxId:bdff71c3d4858bb831f2c36a0179e4dbf358a346cdc686d29eec47ccd39ce06f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760639813353375248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d715558de3333fcf28797b1
aba6563a8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778ac25095494395c04a6763c26835f601f22b8a4a59e392cea250e617cac46,PodSandboxId:c004de06c4e1b811c77f62d218086e7e2ce6be6781f80b5c75b6aafe5b0f3028,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760639813329612717,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aef50597c38c02aacfaf9563074a991,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e3db05604c547b70d1a4c88957ec719ece9c007b703c1d1c7f3e55895aa3a7,PodSandboxId:c3fc6478fcb5710a0a296071d81fc52f8886fbbd7c8a4034c54387e56b28f59d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760639813320007890,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-747936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1edfb87d8d49ce139668b211f1a72f8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=729f463c-8dd2-4469-9c61-082471755ce3 name=/runtime.v1.RuntimeService/ListContainers
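
Each ListContainers entry above is a single gRPC round trip to the CRI-O socket named in the node's cri-socket annotation (unix:///var/run/crio/crio.sock); an empty ContainerFilter takes the "No filters were applied" branch and returns the full container list. A minimal sketch of the same call, assuming the published k8s.io/cri-api client stubs (the truncated-ID output format is purely illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O socket that the kubelet (and these debug entries) use.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter reproduces the "returning full container list" path.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%.13s  %s  %s\n", c.Id, c.State, c.Metadata.Name)
        }
    }

The "container status" table that follows is essentially this response rendered for humans: 13-character container ID, image, age, state, name, attempt, and pod.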
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	28966965f7f9d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   4847f89e17e97       coredns-668d6bf9bc-zfnfj
	dcdb58ebc92b8       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   c46e6c1e48503       kube-proxy-5nj2g
	502728fd15a4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       1                   d7cfc705e3a8b       storage-provisioner
	23c25ca8532a0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   bdff71c3d4858       etcd-test-preload-747936
	2de98591e4b69       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   4a2c13b846c3f       kube-controller-manager-test-preload-747936
	5778ac2509549       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   c004de06c4e1b       kube-apiserver-test-preload-747936
	77e3db05604c5       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   c3fc6478fcb57       kube-scheduler-test-preload-747936
	
	
	==> coredns [28966965f7f9dda78981409ab534a5f7f8cf237982b58e14897dc9ae211bbf60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34303 - 64622 "HINFO IN 8625839506362453383.5986646062669186930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020307241s
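
The lone HINFO query for a long random name is CoreDNS's loop-detection probe: on startup the loop plugin sends a query for a random subdomain through its own forwarding path and declares a loop if it sees that query come back to itself. The NXDOMAIN answer logged here is the healthy result, indicating DNS forwarding inside the restarted cluster is not looping.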
	
	
	==> describe nodes <==
	Name:               test-preload-747936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-747936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=test-preload-747936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_35_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:35:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-747936
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:37:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:36:57 +0000   Thu, 16 Oct 2025 18:35:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:36:57 +0000   Thu, 16 Oct 2025 18:35:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:36:57 +0000   Thu, 16 Oct 2025 18:35:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:36:57 +0000   Thu, 16 Oct 2025 18:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    test-preload-747936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b48e43121de4f8cb8159b5fcbc42beb
	  System UUID:                7b48e431-21de-4f8c-b815-9b5fcbc42beb
	  Boot ID:                    817b7b29-5ce1-4d8f-8195-3c2c22e09355
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-zfnfj                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     110s
	  kube-system                 etcd-test-preload-747936                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         116s
	  kube-system                 kube-apiserver-test-preload-747936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-test-preload-747936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-5nj2g                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-test-preload-747936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 108s                 kube-proxy       
	  Normal   Starting                 17s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node test-preload-747936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node test-preload-747936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node test-preload-747936 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node test-preload-747936 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node test-preload-747936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node test-preload-747936 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Normal   NodeReady                114s                 kubelet          Node test-preload-747936 status is now: NodeReady
	  Normal   RegisteredNode           111s                 node-controller  Node test-preload-747936 event: Registered Node test-preload-747936 in Controller
	  Normal   Starting                 23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-747936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-747936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-747936 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                  kubelet          Node test-preload-747936 has been rebooted, boot id: 817b7b29-5ce1-4d8f-8195-3c2c22e09355
	  Normal   RegisteredNode           15s                  node-controller  Node test-preload-747936 event: Registered Node test-preload-747936 in Controller
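The node summary above is in the format produced by "kubectl describe node". A minimal Go sketch of capturing the same summary while the cluster is still up (illustrative only, not minikube's actual helper code; it assumes kubectl is on PATH):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// The kubectl context matches the minikube profile name, as the
	// "kubectl --context test-preload-747936 ..." invocations in this report show.
	out, err := exec.Command("kubectl", "--context", "test-preload-747936",
		"describe", "node", "test-preload-747936").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl describe node failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}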
	
	
	==> dmesg <==
	[Oct16 18:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.012236] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.975556] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082454] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.098982] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.499434] kauditd_printk_skb: 177 callbacks suppressed
	[Oct16 18:37] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [23c25ca8532a05b2b0b15bb0000f2beb06f4477abe7c946c2eafdbfb99c7dc05] <==
	{"level":"info","ts":"2025-10-16T18:36:53.812959Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2025-10-16T18:36:53.813161Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T18:36:53.816606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=(16039877851787559060)"}
	{"level":"info","ts":"2025-10-16T18:36:53.817531Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","added-peer-id":"de9917ec5c740094","added-peer-peer-urls":["https://192.168.39.234:2380"]}
	{"level":"info","ts":"2025-10-16T18:36:53.817662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:36:53.819396Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:36:53.819854Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2025-10-16T18:36:53.819889Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T18:36:53.819900Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T18:36:54.946633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-16T18:36:54.946683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-16T18:36:54.946722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgPreVoteResp from de9917ec5c740094 at term 2"}
	{"level":"info","ts":"2025-10-16T18:36:54.946734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became candidate at term 3"}
	{"level":"info","ts":"2025-10-16T18:36:54.946743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgVoteResp from de9917ec5c740094 at term 3"}
	{"level":"info","ts":"2025-10-16T18:36:54.946759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became leader at term 3"}
	{"level":"info","ts":"2025-10-16T18:36:54.946766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: de9917ec5c740094 elected leader de9917ec5c740094 at term 3"}
	{"level":"info","ts":"2025-10-16T18:36:54.948476Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:36:54.948623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:36:54.948895Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-16T18:36:54.948926Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-16T18:36:54.948481Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"de9917ec5c740094","local-member-attributes":"{Name:test-preload-747936 ClientURLs:[https://192.168.39.234:2379]}","request-path":"/0/members/de9917ec5c740094/attributes","cluster-id":"6193f7f4ee516b71","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-16T18:36:54.949611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-16T18:36:54.949656Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-16T18:36:54.950269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-16T18:36:54.950286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.234:2379"}
	
	
	==> kernel <==
	 18:37:14 up 0 min,  0 users,  load average: 0.42, 0.11, 0.04
	Linux test-preload-747936 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5778ac25095494395c04a6763c26835f601f22b8a4a59e392cea250e617cac46] <==
	I1016 18:36:56.148605       1 policy_source.go:240] refreshing policies
	I1016 18:36:56.180940       1 shared_informer.go:320] Caches are synced for configmaps
	I1016 18:36:56.181150       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:36:56.182112       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:36:56.182829       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1016 18:36:56.182853       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:36:56.186025       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1016 18:36:56.186116       1 aggregator.go:171] initial CRD sync complete...
	I1016 18:36:56.186158       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:36:56.186192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:36:56.186215       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:36:56.187825       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:36:56.206785       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:36:56.211254       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 18:36:56.211296       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 18:36:56.222441       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E1016 18:36:56.229440       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:36:56.989588       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:36:57.382131       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1016 18:36:57.416218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1016 18:36:57.446244       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:36:57.455600       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:36:59.601916       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1016 18:36:59.650103       1 controller.go:615] quota admission added evaluator for: endpoints
	I1016 18:36:59.700860       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2de98591e4b695aaced10cf37545d238efb64fbe7e4eff7bd69952a1e84fc9b4] <==
	I1016 18:36:59.316161       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1016 18:36:59.317302       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1016 18:36:59.317395       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1016 18:36:59.319762       1 shared_informer.go:320] Caches are synced for TTL
	I1016 18:36:59.322949       1 shared_informer.go:320] Caches are synced for node
	I1016 18:36:59.323003       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:36:59.323038       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:36:59.323042       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1016 18:36:59.323047       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1016 18:36:59.323278       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-747936"
	I1016 18:36:59.323597       1 shared_informer.go:320] Caches are synced for taint
	I1016 18:36:59.323673       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 18:36:59.323969       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-747936"
	I1016 18:36:59.324060       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 18:36:59.327229       1 shared_informer.go:320] Caches are synced for attach detach
	I1016 18:36:59.328682       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1016 18:36:59.331097       1 shared_informer.go:320] Caches are synced for endpoint
	I1016 18:36:59.333358       1 shared_informer.go:320] Caches are synced for garbage collector
	I1016 18:36:59.335543       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1016 18:36:59.340250       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1016 18:36:59.610646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="307.4471ms"
	I1016 18:36:59.610715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="40.332µs"
	I1016 18:37:01.255249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.199µs"
	I1016 18:37:05.703672       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.720805ms"
	I1016 18:37:05.703761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.924µs"
	
	
	==> kube-proxy [dcdb58ebc92b8f32603a4906f0fc7ae9166ef168d6749667741b4153dfab6be4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1016 18:36:56.804610       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1016 18:36:56.813878       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.234"]
	E1016 18:36:56.814011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:36:56.847894       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1016 18:36:56.847936       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1016 18:36:56.847958       1 server_linux.go:170] "Using iptables Proxier"
	I1016 18:36:56.850482       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:36:56.850780       1 server.go:497] "Version info" version="v1.32.0"
	I1016 18:36:56.850804       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:36:56.852500       1 config.go:199] "Starting service config controller"
	I1016 18:36:56.852534       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1016 18:36:56.852559       1 config.go:105] "Starting endpoint slice config controller"
	I1016 18:36:56.852576       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1016 18:36:56.853120       1 config.go:329] "Starting node config controller"
	I1016 18:36:56.853161       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1016 18:36:56.953437       1 shared_informer.go:320] Caches are synced for node config
	I1016 18:36:56.953458       1 shared_informer.go:320] Caches are synced for service config
	I1016 18:36:56.953466       1 shared_informer.go:320] Caches are synced for endpoint slice config
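The truncated block at the top of this log is the tail of kube-proxy's nftables cleanup errors: "add table ip kube-proxy" and "add table ip6 kube-proxy" both fail with "Operation not supported" because the guest kernel lacks nftables, so the cleanup is a no-op and kube-proxy proceeds with the iptables proxier ("Using iptables Proxier"). A minimal Go sketch for checking which proxier a run selected (illustrative; the pod name kube-proxy-5nj2g is taken from the node summary above):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "test-preload-747936",
		"logs", "-n", "kube-system", "kube-proxy-5nj2g").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl logs failed: %v\n%s", err, out)
	}
	if strings.Contains(string(out), "Using iptables Proxier") {
		fmt.Println("kube-proxy fell back to the iptables proxier")
	}
}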
	
	
	==> kube-scheduler [77e3db05604c547b70d1a4c88957ec719ece9c007b703c1d1c7f3e55895aa3a7] <==
	I1016 18:36:54.357715       1 serving.go:386] Generated self-signed cert in-memory
	W1016 18:36:56.075952       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1016 18:36:56.075992       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1016 18:36:56.076003       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1016 18:36:56.076014       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1016 18:36:56.119923       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1016 18:36:56.119975       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:36:56.126664       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:36:56.126706       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1016 18:36:56.133603       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1016 18:36:56.133986       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:36:56.228044       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 16 18:36:55 test-preload-747936 kubelet[1152]: E1016 18:36:55.198728    1152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-747936\" not found" node="test-preload-747936"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.073745    1152 apiserver.go:52] "Watching apiserver"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: E1016 18:36:56.089531    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-zfnfj" podUID="3c92d0b3-f8f4-45cf-9353-43f4b1f26dde"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.100546    1152 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.204998    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/057f5ae8-4c19-4d1c-a68e-25a3d4ad7355-xtables-lock\") pod \"kube-proxy-5nj2g\" (UID: \"057f5ae8-4c19-4d1c-a68e-25a3d4ad7355\") " pod="kube-system/kube-proxy-5nj2g"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.205053    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/057f5ae8-4c19-4d1c-a68e-25a3d4ad7355-lib-modules\") pod \"kube-proxy-5nj2g\" (UID: \"057f5ae8-4c19-4d1c-a68e-25a3d4ad7355\") " pod="kube-system/kube-proxy-5nj2g"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.205070    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d6185384-2b63-49e9-8c84-8d0318e3f4ad-tmp\") pod \"storage-provisioner\" (UID: \"d6185384-2b63-49e9-8c84-8d0318e3f4ad\") " pod="kube-system/storage-provisioner"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: E1016 18:36:56.205596    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: E1016 18:36:56.205822    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3c92d0b3-f8f4-45cf-9353-43f4b1f26dde-config-volume podName:3c92d0b3-f8f4-45cf-9353-43f4b1f26dde nodeName:}" failed. No retries permitted until 2025-10-16 18:36:56.705797775 +0000 UTC m=+5.716886504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c92d0b3-f8f4-45cf-9353-43f4b1f26dde-config-volume") pod "coredns-668d6bf9bc-zfnfj" (UID: "3c92d0b3-f8f4-45cf-9353-43f4b1f26dde") : object "kube-system"/"coredns" not registered
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.271699    1152 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-747936"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.272528    1152 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-747936"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.272559    1152 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.274195    1152 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: I1016 18:36:56.276115    1152 setters.go:602] "Node became not ready" node="test-preload-747936" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-16T18:36:56Z","lastTransitionTime":"2025-10-16T18:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: E1016 18:36:56.707985    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 16 18:36:56 test-preload-747936 kubelet[1152]: E1016 18:36:56.708236    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3c92d0b3-f8f4-45cf-9353-43f4b1f26dde-config-volume podName:3c92d0b3-f8f4-45cf-9353-43f4b1f26dde nodeName:}" failed. No retries permitted until 2025-10-16 18:36:57.708207217 +0000 UTC m=+6.719295945 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c92d0b3-f8f4-45cf-9353-43f4b1f26dde-config-volume") pod "coredns-668d6bf9bc-zfnfj" (UID: "3c92d0b3-f8f4-45cf-9353-43f4b1f26dde") : object "kube-system"/"coredns" not registered
	Oct 16 18:36:57 test-preload-747936 kubelet[1152]: I1016 18:36:57.665779    1152 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 16 18:36:57 test-preload-747936 kubelet[1152]: E1016 18:36:57.715772    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 16 18:36:57 test-preload-747936 kubelet[1152]: E1016 18:36:57.715863    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3c92d0b3-f8f4-45cf-9353-43f4b1f26dde-config-volume podName:3c92d0b3-f8f4-45cf-9353-43f4b1f26dde nodeName:}" failed. No retries permitted until 2025-10-16 18:36:59.715849241 +0000 UTC m=+8.726937982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c92d0b3-f8f4-45cf-9353-43f4b1f26dde-config-volume") pod "coredns-668d6bf9bc-zfnfj" (UID: "3c92d0b3-f8f4-45cf-9353-43f4b1f26dde") : object "kube-system"/"coredns" not registered
	Oct 16 18:37:01 test-preload-747936 kubelet[1152]: E1016 18:37:01.148161    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639821147706005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 16 18:37:01 test-preload-747936 kubelet[1152]: E1016 18:37:01.148187    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639821147706005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 16 18:37:02 test-preload-747936 kubelet[1152]: I1016 18:37:02.241934    1152 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:37:05 test-preload-747936 kubelet[1152]: I1016 18:37:05.675492    1152 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:37:11 test-preload-747936 kubelet[1152]: E1016 18:37:11.150488    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639831149683706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 16 18:37:11 test-preload-747936 kubelet[1152]: E1016 18:37:11.150528    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760639831149683706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
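The paired eviction-manager errors repeat every stats sync: the kubelet cannot derive HasDedicatedImageFs from the ImageFsInfo response CRI-O returns, even though the response clearly carries usage for the overlay-images mountpoint, so this is a stats-shape mismatch rather than a missing filesystem. A hedged Go sketch for fetching the same CRI response by hand while the node is still up (assumes crictl is present in the guest image; run through minikube ssh):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Query the CRI image-filesystem stats the eviction manager consumes.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-747936",
		"ssh", "sudo crictl imagefsinfo").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl imagefsinfo failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}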
	
	
	==> storage-provisioner [502728fd15a4f7e34c4c7810ad019d662955328cb012921916742a60526a1f86] <==
	I1016 18:36:56.701699       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-747936 -n test-preload-747936
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-747936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-747936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-747936
--- FAIL: TestPreload (168.76s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (64.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-050003 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-050003 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.200050688s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-050003] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-050003" primary control-plane node in "pause-050003" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-050003" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
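The marker the test requires never appears in the stdout above; instead the second start re-provisioned the machine and restarted CRI-O. A minimal Go sketch of the kind of substring check pause_test.go:100 performs (illustrative only, not the actual test source; flags copied from the invocation above):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-050003",
		"--alsologtostderr", "-v=1", "--driver=kvm2",
		"--container-runtime=crio", "--auto-update-drivers=false").CombinedOutput()
	if err != nil {
		log.Fatalf("second start failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "The running cluster does not require reconfiguration") {
		fmt.Printf("second start reconfigured the running cluster; output:\n%s", out)
	}
}
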
** stderr ** 
	I1016 18:43:49.413112   51557 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:43:49.413434   51557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:43:49.413443   51557 out.go:374] Setting ErrFile to fd 2...
	I1016 18:43:49.413447   51557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:43:49.413659   51557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:43:49.414176   51557 out.go:368] Setting JSON to false
	I1016 18:43:49.415365   51557 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5167,"bootTime":1760635062,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:43:49.415456   51557 start.go:141] virtualization: kvm guest
	I1016 18:43:49.417174   51557 out.go:179] * [pause-050003] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:43:49.418445   51557 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:43:49.418466   51557 notify.go:220] Checking for updates...
	I1016 18:43:49.420660   51557 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:43:49.421851   51557 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 18:43:49.423042   51557 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 18:43:49.424070   51557 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:43:49.428389   51557 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:43:49.430413   51557 config.go:182] Loaded profile config "pause-050003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:43:49.431033   51557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:43:49.431141   51557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:43:49.452461   51557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
	I1016 18:43:49.453095   51557 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:43:49.453780   51557 main.go:141] libmachine: Using API Version  1
	I1016 18:43:49.453820   51557 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:43:49.454446   51557 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:43:49.454866   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:49.455142   51557 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:43:49.455435   51557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:43:49.455472   51557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:43:49.470045   51557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I1016 18:43:49.470661   51557 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:43:49.471229   51557 main.go:141] libmachine: Using API Version  1
	I1016 18:43:49.471249   51557 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:43:49.471778   51557 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:43:49.472004   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:49.516292   51557 out.go:179] * Using the kvm2 driver based on existing profile
	I1016 18:43:49.517377   51557 start.go:305] selected driver: kvm2
	I1016 18:43:49.517394   51557 start.go:925] validating driver "kvm2" against &{Name:pause-050003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-050003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:43:49.517558   51557 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:43:49.517939   51557 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:43:49.518027   51557 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 18:43:49.534747   51557 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 18:43:49.534783   51557 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 18:43:49.555609   51557 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 18:43:49.556453   51557 cni.go:84] Creating CNI manager for ""
	I1016 18:43:49.556506   51557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 18:43:49.556581   51557 start.go:349] cluster config:
	{Name:pause-050003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-050003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:43:49.556753   51557 iso.go:125] acquiring lock: {Name:mke23fa091b5b2529e94c2fba7379020f81892c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:43:49.560974   51557 out.go:179] * Starting "pause-050003" primary control-plane node in "pause-050003" cluster
	I1016 18:43:49.562163   51557 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:43:49.562199   51557 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:43:49.562206   51557 cache.go:58] Caching tarball of preloaded images
	I1016 18:43:49.562300   51557 preload.go:233] Found /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:43:49.562311   51557 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:43:49.562429   51557 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/config.json ...
	I1016 18:43:49.562707   51557 start.go:360] acquireMachinesLock for pause-050003: {Name:mkfc8a48414152b8c16845fb35ed65ca3f42bae5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1016 18:43:49.562765   51557 start.go:364] duration metric: took 34.927µs to acquireMachinesLock for "pause-050003"
	I1016 18:43:49.562786   51557 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:43:49.562805   51557 fix.go:54] fixHost starting: 
	I1016 18:43:49.563065   51557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:43:49.563104   51557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:43:49.578401   51557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1016 18:43:49.579021   51557 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:43:49.579926   51557 main.go:141] libmachine: Using API Version  1
	I1016 18:43:49.579949   51557 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:43:49.580461   51557 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:43:49.580653   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:49.580833   51557 main.go:141] libmachine: (pause-050003) Calling .GetState
	I1016 18:43:49.583311   51557 fix.go:112] recreateIfNeeded on pause-050003: state=Running err=<nil>
	W1016 18:43:49.583346   51557 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:43:49.585027   51557 out.go:252] * Updating the running kvm2 "pause-050003" VM ...
	I1016 18:43:49.585077   51557 machine.go:93] provisionDockerMachine start ...
	I1016 18:43:49.585095   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:49.585312   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:49.588785   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:49.589250   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:49.589281   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:49.589503   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:49.589690   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:49.589855   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:49.590028   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:49.590201   51557 main.go:141] libmachine: Using SSH client type: native
	I1016 18:43:49.590523   51557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I1016 18:43:49.590542   51557 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:43:49.714696   51557 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-050003
	
	I1016 18:43:49.714733   51557 main.go:141] libmachine: (pause-050003) Calling .GetMachineName
	I1016 18:43:49.715056   51557 buildroot.go:166] provisioning hostname "pause-050003"
	I1016 18:43:49.715085   51557 main.go:141] libmachine: (pause-050003) Calling .GetMachineName
	I1016 18:43:49.715536   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:49.719250   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:49.719693   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:49.719715   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:49.719935   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:49.720139   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:49.720302   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:49.720444   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:49.720680   51557 main.go:141] libmachine: Using SSH client type: native
	I1016 18:43:49.720910   51557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I1016 18:43:49.720924   51557 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-050003 && echo "pause-050003" | sudo tee /etc/hostname
	I1016 18:43:49.863398   51557 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-050003
	
	I1016 18:43:49.863433   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:49.867621   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:49.868229   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:49.868309   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:49.868744   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:49.868984   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:49.869199   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:49.869390   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:49.869596   51557 main.go:141] libmachine: Using SSH client type: native
	I1016 18:43:49.869898   51557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I1016 18:43:49.869928   51557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-050003' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-050003/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-050003' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:43:49.993053   51557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:43:49.993195   51557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8816/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8816/.minikube}
	I1016 18:43:49.993226   51557 buildroot.go:174] setting up certificates
	I1016 18:43:49.993240   51557 provision.go:84] configureAuth start
	I1016 18:43:49.993255   51557 main.go:141] libmachine: (pause-050003) Calling .GetMachineName
	I1016 18:43:49.993482   51557 main.go:141] libmachine: (pause-050003) Calling .GetIP
	I1016 18:43:49.999878   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.000744   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:50.000832   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:50.000878   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.004561   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.005210   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:50.005250   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.005500   51557 provision.go:143] copyHostCerts
	I1016 18:43:50.005565   51557 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8816/.minikube/ca.pem, removing ...
	I1016 18:43:50.005590   51557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.pem
	I1016 18:43:50.005684   51557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/ca.pem (1078 bytes)
	I1016 18:43:50.005868   51557 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8816/.minikube/cert.pem, removing ...
	I1016 18:43:50.005881   51557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8816/.minikube/cert.pem
	I1016 18:43:50.005934   51557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/cert.pem (1123 bytes)
	I1016 18:43:50.006026   51557 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8816/.minikube/key.pem, removing ...
	I1016 18:43:50.006037   51557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8816/.minikube/key.pem
	I1016 18:43:50.006076   51557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8816/.minikube/key.pem (1675 bytes)
	I1016 18:43:50.006180   51557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem org=jenkins.pause-050003 san=[127.0.0.1 192.168.39.142 localhost minikube pause-050003]
	I1016 18:43:50.181810   51557 provision.go:177] copyRemoteCerts
	I1016 18:43:50.181899   51557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:43:50.181931   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:50.186052   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.186641   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:50.186703   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.187076   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:50.187342   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:50.187547   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:50.187714   51557 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/pause-050003/id_rsa Username:docker}
	I1016 18:43:50.285297   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:43:50.331491   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:43:50.370259   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:43:50.416493   51557 provision.go:87] duration metric: took 423.23914ms to configureAuth
	I1016 18:43:50.416530   51557 buildroot.go:189] setting minikube options for container-runtime
	I1016 18:43:50.416853   51557 config.go:182] Loaded profile config "pause-050003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:43:50.416964   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:50.421058   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.421709   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:50.421740   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:50.421977   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:50.422228   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:50.422448   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:50.422660   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:50.422868   51557 main.go:141] libmachine: Using SSH client type: native
	I1016 18:43:50.423103   51557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I1016 18:43:50.423137   51557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:43:55.973018   51557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:43:55.973049   51557 machine.go:96] duration metric: took 6.387962202s to provisionDockerMachine
	I1016 18:43:55.973065   51557 start.go:293] postStartSetup for "pause-050003" (driver="kvm2")
	I1016 18:43:55.973079   51557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:43:55.973100   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:55.973471   51557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:43:55.973582   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:55.977018   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:55.977495   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:55.977523   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:55.977814   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:55.977994   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:55.978108   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:55.978307   51557 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/pause-050003/id_rsa Username:docker}
	I1016 18:43:56.065281   51557 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:43:56.070990   51557 info.go:137] Remote host: Buildroot 2025.02
	I1016 18:43:56.071022   51557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8816/.minikube/addons for local assets ...
	I1016 18:43:56.071080   51557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8816/.minikube/files for local assets ...
	I1016 18:43:56.071174   51557 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem -> 127672.pem in /etc/ssl/certs
	I1016 18:43:56.071285   51557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:43:56.083801   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem --> /etc/ssl/certs/127672.pem (1708 bytes)
	I1016 18:43:56.114053   51557 start.go:296] duration metric: took 140.974319ms for postStartSetup
	I1016 18:43:56.114097   51557 fix.go:56] duration metric: took 6.551301078s for fixHost
	I1016 18:43:56.114135   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:56.117286   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.117770   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:56.117800   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.118064   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:56.118325   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:56.118527   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:56.118721   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:56.118930   51557 main.go:141] libmachine: Using SSH client type: native
	I1016 18:43:56.119147   51557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I1016 18:43:56.119157   51557 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1016 18:43:56.228862   51557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760640236.225729056
	
	I1016 18:43:56.228894   51557 fix.go:216] guest clock: 1760640236.225729056
	I1016 18:43:56.228901   51557 fix.go:229] Guest: 2025-10-16 18:43:56.225729056 +0000 UTC Remote: 2025-10-16 18:43:56.11410247 +0000 UTC m=+6.746430963 (delta=111.626586ms)
	I1016 18:43:56.228920   51557 fix.go:200] guest clock delta is within tolerance: 111.626586ms
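	
	The clock check above runs date +%s.%N on the guest and diffs it against host time, accepting small drift. A host-side sketch of the same comparison; the ssh invocation is a hypothetical stand-in for minikube's SSH client, with the user and IP from the log:
	
		guest=$(ssh docker@192.168.39.142 'date +%s.%N')  # guest clock
		host=$(date +%s.%N)                               # host clock
		echo "delta: $(echo "$guest - $host" | bc)s"      # ~0.111s in this run, within tolerance
	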
	I1016 18:43:56.228926   51557 start.go:83] releasing machines lock for "pause-050003", held for 6.666148375s
	I1016 18:43:56.228953   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:56.229239   51557 main.go:141] libmachine: (pause-050003) Calling .GetIP
	I1016 18:43:56.232855   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.233338   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:56.233382   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.233589   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:56.234248   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:56.234429   51557 main.go:141] libmachine: (pause-050003) Calling .DriverName
	I1016 18:43:56.234539   51557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:43:56.234609   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:56.234706   51557 ssh_runner.go:195] Run: cat /version.json
	I1016 18:43:56.234734   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHHostname
	I1016 18:43:56.237864   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.238195   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.238343   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:56.238368   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.238556   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:56.238714   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:56.238734   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:56.238760   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:56.238966   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:56.238971   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHPort
	I1016 18:43:56.239172   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHKeyPath
	I1016 18:43:56.239200   51557 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/pause-050003/id_rsa Username:docker}
	I1016 18:43:56.239315   51557 main.go:141] libmachine: (pause-050003) Calling .GetSSHUsername
	I1016 18:43:56.239464   51557 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/pause-050003/id_rsa Username:docker}
	I1016 18:43:56.352242   51557 ssh_runner.go:195] Run: systemctl --version
	I1016 18:43:56.358766   51557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:43:56.515362   51557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:43:56.522784   51557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:43:56.522869   51557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:43:56.534473   51557 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:43:56.534501   51557 start.go:495] detecting cgroup driver to use...
	I1016 18:43:56.534581   51557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:43:56.553744   51557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:43:56.571324   51557 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:43:56.571394   51557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:43:56.590787   51557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:43:56.607918   51557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:43:56.793583   51557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:43:56.969137   51557 docker.go:234] disabling docker service ...
	I1016 18:43:56.969209   51557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:43:56.999203   51557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:43:57.017424   51557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:43:57.214238   51557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:43:57.384650   51557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:43:57.401679   51557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:43:57.425892   51557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:43:57.425981   51557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:43:57.441992   51557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:43:57.442059   51557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:43:57.456800   51557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:43:57.469404   51557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:43:57.482112   51557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:43:57.496036   51557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:43:57.509244   51557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:43:57.523038   51557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:43:57.535074   51557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:43:57.547949   51557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:43:57.559191   51557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:43:57.725713   51557 ssh_runner.go:195] Run: sudo systemctl restart crio
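	
	Condensed, the cri-o reconfiguration in the Run: lines above amounts to these guest-side edits (commands copied from the log; run as root):
	
		printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml
		sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
		sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		echo 1 > /proc/sys/net/ipv4/ip_forward
		systemctl daemon-reload && systemctl restart crio
	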
	I1016 18:43:58.127390   51557 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:43:58.127463   51557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:43:58.139195   51557 start.go:563] Will wait 60s for crictl version
	I1016 18:43:58.139251   51557 ssh_runner.go:195] Run: which crictl
	I1016 18:43:58.147913   51557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1016 18:43:58.251230   51557 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1016 18:43:58.251330   51557 ssh_runner.go:195] Run: crio --version
	I1016 18:43:58.346649   51557 ssh_runner.go:195] Run: crio --version
	I1016 18:43:58.426219   51557 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1016 18:43:58.427633   51557 main.go:141] libmachine: (pause-050003) Calling .GetIP
	I1016 18:43:58.432024   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:58.432633   51557 main.go:141] libmachine: (pause-050003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:ef:19", ip: ""} in network mk-pause-050003: {Iface:virbr1 ExpiryTime:2025-10-16 19:43:07 +0000 UTC Type:0 Mac:52:54:00:cc:ef:19 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-050003 Clientid:01:52:54:00:cc:ef:19}
	I1016 18:43:58.432664   51557 main.go:141] libmachine: (pause-050003) DBG | domain pause-050003 has defined IP address 192.168.39.142 and MAC address 52:54:00:cc:ef:19 in network mk-pause-050003
	I1016 18:43:58.433041   51557 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1016 18:43:58.443041   51557 kubeadm.go:883] updating cluster {Name:pause-050003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-050003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:43:58.443249   51557 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:43:58.443316   51557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:43:58.640795   51557 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:43:58.640819   51557 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:43:58.640879   51557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:43:58.772675   51557 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:43:58.772708   51557 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:43:58.772720   51557 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.34.1 crio true true} ...
	I1016 18:43:58.772874   51557 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-050003 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-050003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:43:58.772965   51557 ssh_runner.go:195] Run: crio config
	I1016 18:43:58.855957   51557 cni.go:84] Creating CNI manager for ""
	I1016 18:43:58.855992   51557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 18:43:58.856019   51557 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:43:58.856050   51557 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-050003 NodeName:pause-050003 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:43:58.856236   51557 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-050003"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.142"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:43:58.856324   51557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:43:58.901630   51557 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:43:58.901690   51557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:43:58.932516   51557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1016 18:43:58.990109   51557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:43:59.052365   51557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
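	
	The rendered kubeadm config shown above lands at /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file before it is used, assuming kubeadm v1.34.1 is on PATH (recent kubeadm releases ship a config validate subcommand):
	
		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	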
	I1016 18:43:59.116418   51557 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I1016 18:43:59.132019   51557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:43:59.518745   51557 ssh_runner.go:195] Run: sudo systemctl start kubelet
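	
	Per the scp lines above, the kubelet unit and its kubeadm drop-in land at /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the reload/start pair is the guest-side equivalent of the two Run: lines:
	
		systemctl daemon-reload
		systemctl start kubelet
		systemctl cat kubelet   # shows the merged unit plus the 10-kubeadm.conf override
	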
	I1016 18:43:59.549623   51557 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003 for IP: 192.168.39.142
	I1016 18:43:59.549648   51557 certs.go:195] generating shared ca certs ...
	I1016 18:43:59.549667   51557 certs.go:227] acquiring lock for ca certs: {Name:mkad193a0fb33fec0ea18d9a56f494b9b8779adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:43:59.549845   51557 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key
	I1016 18:43:59.549903   51557 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key
	I1016 18:43:59.549916   51557 certs.go:257] generating profile certs ...
	I1016 18:43:59.550031   51557 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/client.key
	I1016 18:43:59.550136   51557 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/apiserver.key.16c57672
	I1016 18:43:59.550205   51557 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/proxy-client.key
	I1016 18:43:59.550341   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767.pem (1338 bytes)
	W1016 18:43:59.550377   51557 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767_empty.pem, impossibly tiny 0 bytes
	I1016 18:43:59.550386   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:43:59.550412   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:43:59.550439   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:43:59.550466   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem (1675 bytes)
	I1016 18:43:59.550521   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem (1708 bytes)
	I1016 18:43:59.551407   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:43:59.587830   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:43:59.624839   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:43:59.669182   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:43:59.717292   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 18:43:59.782224   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:43:59.824708   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:43:59.875648   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 18:43:59.927171   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem --> /usr/share/ca-certificates/127672.pem (1708 bytes)
	I1016 18:43:59.981017   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:44:00.048221   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767.pem --> /usr/share/ca-certificates/12767.pem (1338 bytes)
	I1016 18:44:00.111635   51557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:44:00.152149   51557 ssh_runner.go:195] Run: openssl version
	I1016 18:44:00.160745   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/127672.pem && ln -fs /usr/share/ca-certificates/127672.pem /etc/ssl/certs/127672.pem"
	I1016 18:44:00.182087   51557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127672.pem
	I1016 18:44:00.190340   51557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:53 /usr/share/ca-certificates/127672.pem
	I1016 18:44:00.190411   51557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127672.pem
	I1016 18:44:00.200034   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/127672.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:44:00.219488   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:44:00.235039   51557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:44:00.243731   51557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:44:00.243803   51557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:44:00.253561   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:44:00.273377   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12767.pem && ln -fs /usr/share/ca-certificates/12767.pem /etc/ssl/certs/12767.pem"
	I1016 18:44:00.291619   51557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12767.pem
	I1016 18:44:00.298352   51557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:53 /usr/share/ca-certificates/12767.pem
	I1016 18:44:00.298443   51557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12767.pem
	I1016 18:44:00.308045   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12767.pem /etc/ssl/certs/51391683.0"
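	
	The openssl/ln sequence above builds OpenSSL-style hash links under /etc/ssl/certs. For a single PEM the pattern is (file name and the b5213941 hash come from the log; run as root):
	
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 here
		test -L "/etc/ssl/certs/${hash}.0" || ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	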
	I1016 18:44:00.325585   51557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:44:00.338672   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:44:00.364677   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:44:00.385675   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:44:00.400524   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:44:00.416528   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:44:00.430404   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
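	
	Each -checkend 86400 run asks whether a certificate expires within the next 24 hours (86400 seconds): openssl exits 0 if the cert stays valid past that window and non-zero otherwise, which is what lets this restart path skip regeneration. For example, with a path from the log:
	
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
		  && echo "valid for at least 24h" || echo "expiring within 24h"
	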
	I1016 18:44:00.439360   51557 kubeadm.go:400] StartCluster: {Name:pause-050003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-050003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:44:00.439523   51557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:44:00.439649   51557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:44:00.487551   51557 cri.go:89] found id: "f9fcc7fcff99c9c9feff79f0d00665cef6acfe8dcb7ca44d012a9367a1f5cb68"
	I1016 18:44:00.487580   51557 cri.go:89] found id: "ae46e480b4faf6eb9510232952087132b353b9d324bce2fee0274bc2b1607112"
	I1016 18:44:00.487587   51557 cri.go:89] found id: "07b011184f53e19705ba685e0bdc155d5dd61f83a23a933e33833c94189a862c"
	I1016 18:44:00.487592   51557 cri.go:89] found id: "138900f082cb91cf15dec293a7219ac9ef1d0562c26a936b116159079c13521b"
	I1016 18:44:00.487596   51557 cri.go:89] found id: "d14ab5accd20dd228f5abd44ad8759e00e8f52bfe7cbbf70e751c88e8a19168a"
	I1016 18:44:00.487612   51557 cri.go:89] found id: "af07ef62c7d7cb7433c0d0a5080931686ac5ff7997a17a4f3faa1c86d84f1fe3"
	I1016 18:44:00.487617   51557 cri.go:89] found id: "d37befefb8cb45666ac08e050ad83458852f70cdc44daac678980be7982ed88e"
	I1016 18:44:00.487620   51557 cri.go:89] found id: "1b2f1e517947eacc22d2130f03c0c2330bf0d0ce34d5994124fc25a5cd9442fc"
	I1016 18:44:00.487624   51557 cri.go:89] found id: "8826fedd55538ba5a818b005995a769c41fd97532ab28f367823d54da6f1ce1a"
	I1016 18:44:00.487632   51557 cri.go:89] found id: "e114ce1dfa21b0e5df66c962c5a4f47663c6d49a358bbd9c96a5c4cb97d28bc3"
	I1016 18:44:00.487636   51557 cri.go:89] found id: "a47769b140ecbbfc798478b8f5f1349bbc5c366196938972d84a79f08bd90d47"
	I1016 18:44:00.487640   51557 cri.go:89] found id: "93a48ed4131a6a79c5441907006747d3ef82a04f382eb26fb6c08e2001f9f2f4"
	I1016 18:44:00.487644   51557 cri.go:89] found id: ""
	I1016 18:44:00.487695   51557 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
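
The tail of the stderr shows the second start enumerating kube-system containers before reconfiguring them; stripped of the ssh_runner wrapping, the two queries it issued are (run as root on the guest):

	crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	runc list -f json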
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-050003 -n pause-050003
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-050003 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-050003 logs -n 25: (1.666287837s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-490378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-490378       │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ ssh     │ cert-options-605457 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                        │ cert-options-605457       │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ ssh     │ -p cert-options-605457 -- sudo cat /etc/kubernetes/admin.conf                                                                                                      │ cert-options-605457       │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ delete  │ -p cert-options-605457                                                                                                                                             │ cert-options-605457       │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ start   │ -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-698479 │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:42 UTC │
	│ start   │ -p running-upgrade-715574 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-715574    │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:42 UTC │
	│ ssh     │ -p NoKubernetes-490378 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-490378       │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │                     │
	│ stop    │ -p NoKubernetes-490378                                                                                                                                             │ NoKubernetes-490378       │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ start   │ -p NoKubernetes-490378 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-490378       │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:42 UTC │
	│ stop    │ -p kubernetes-upgrade-698479                                                                                                                                       │ kubernetes-upgrade-698479 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-715574 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-715574    │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-698479 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:43 UTC │
	│ delete  │ -p running-upgrade-715574                                                                                                                                          │ running-upgrade-715574    │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ ssh     │ -p NoKubernetes-490378 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-490378       │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ delete  │ -p NoKubernetes-490378                                                                                                                                             │ NoKubernetes-490378       │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ start   │ -p pause-050003 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-050003              │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:43 UTC │
	│ start   │ -p stopped-upgrade-806110 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-806110    │ jenkins │ v1.32.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:43 UTC │
	│ start   │ -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-698479 │ jenkins │ v1.37.0 │ 16 Oct 25 18:43 UTC │                     │
	│ start   │ -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-698479 │ jenkins │ v1.37.0 │ 16 Oct 25 18:43 UTC │ 16 Oct 25 18:43 UTC │
	│ start   │ -p pause-050003 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-050003              │ jenkins │ v1.37.0 │ 16 Oct 25 18:43 UTC │ 16 Oct 25 18:44 UTC │
	│ stop    │ stopped-upgrade-806110 stop                                                                                                                                        │ stopped-upgrade-806110    │ jenkins │ v1.32.0 │ 16 Oct 25 18:43 UTC │ 16 Oct 25 18:43 UTC │
	│ delete  │ -p kubernetes-upgrade-698479                                                                                                                                       │ kubernetes-upgrade-698479 │ jenkins │ v1.37.0 │ 16 Oct 25 18:43 UTC │ 16 Oct 25 18:43 UTC │
	│ start   │ -p auto-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                  │ auto-557854               │ jenkins │ v1.37.0 │ 16 Oct 25 18:43 UTC │                     │
	│ start   │ -p stopped-upgrade-806110 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-806110    │ jenkins │ v1.37.0 │ 16 Oct 25 18:43 UTC │                     │
	│ start   │ -p cert-expiration-854144 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ cert-expiration-854144    │ jenkins │ v1.37.0 │ 16 Oct 25 18:44 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:44:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:44:03.688320   51968 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:44:03.688743   51968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:44:03.688746   51968 out.go:374] Setting ErrFile to fd 2...
	I1016 18:44:03.688751   51968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:44:03.689289   51968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:44:03.690257   51968 out.go:368] Setting JSON to false
	I1016 18:44:03.691231   51968 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5182,"bootTime":1760635062,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:44:03.691312   51968 start.go:141] virtualization: kvm guest
	I1016 18:44:03.779592   51968 out.go:179] * [cert-expiration-854144] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:44:03.874926   51968 notify.go:220] Checking for updates...
	I1016 18:44:03.915088   51968 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:44:03.984101   51968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:44:03.998559   51968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 18:44:04.060980   51968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 18:44:04.071973   51968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:44:04.091895   51968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:44:04.093766   51968 config.go:182] Loaded profile config "cert-expiration-854144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:44:04.094481   51968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:44:04.094539   51968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:44:04.111494   51968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I1016 18:44:04.112104   51968 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:44:04.112725   51968 main.go:141] libmachine: Using API Version  1
	I1016 18:44:04.112740   51968 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:44:04.113109   51968 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:44:04.113335   51968 main.go:141] libmachine: (cert-expiration-854144) Calling .DriverName
	I1016 18:44:04.113674   51968 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:44:04.114111   51968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:44:04.114172   51968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:44:04.127969   51968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I1016 18:44:04.128489   51968 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:44:04.129022   51968 main.go:141] libmachine: Using API Version  1
	I1016 18:44:04.129058   51968 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:44:04.129561   51968 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:44:04.129833   51968 main.go:141] libmachine: (cert-expiration-854144) Calling .DriverName
	I1016 18:44:04.167143   51968 out.go:179] * Using the kvm2 driver based on existing profile
	I1016 18:44:04.168282   51968 start.go:305] selected driver: kvm2
	I1016 18:44:04.168289   51968 start.go:925] validating driver "kvm2" against &{Name:cert-expiration-854144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-854144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:44:04.168382   51968 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:44:04.169248   51968 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:44:04.169340   51968 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 18:44:04.184132   51968 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 18:44:04.184171   51968 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 18:44:04.198562   51968 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 18:44:04.198962   51968 cni.go:84] Creating CNI manager for ""
	I1016 18:44:04.199015   51968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 18:44:04.199076   51968 start.go:349] cluster config:
	{Name:cert-expiration-854144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-854144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:44:04.199216   51968 iso.go:125] acquiring lock: {Name:mke23fa091b5b2529e94c2fba7379020f81892c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:44:04.201025   51968 out.go:179] * Starting "cert-expiration-854144" primary control-plane node in "cert-expiration-854144" cluster
	I1016 18:43:59.518745   51557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:43:59.549623   51557 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003 for IP: 192.168.39.142
	I1016 18:43:59.549648   51557 certs.go:195] generating shared ca certs ...
	I1016 18:43:59.549667   51557 certs.go:227] acquiring lock for ca certs: {Name:mkad193a0fb33fec0ea18d9a56f494b9b8779adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:43:59.549845   51557 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key
	I1016 18:43:59.549903   51557 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key
	I1016 18:43:59.549916   51557 certs.go:257] generating profile certs ...
	I1016 18:43:59.550031   51557 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/client.key
	I1016 18:43:59.550136   51557 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/apiserver.key.16c57672
	I1016 18:43:59.550205   51557 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/proxy-client.key
	I1016 18:43:59.550341   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767.pem (1338 bytes)
	W1016 18:43:59.550377   51557 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767_empty.pem, impossibly tiny 0 bytes
	I1016 18:43:59.550386   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:43:59.550412   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:43:59.550439   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:43:59.550466   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/certs/key.pem (1675 bytes)
	I1016 18:43:59.550521   51557 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem (1708 bytes)
	I1016 18:43:59.551407   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:43:59.587830   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:43:59.624839   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:43:59.669182   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:43:59.717292   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 18:43:59.782224   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:43:59.824708   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:43:59.875648   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/pause-050003/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 18:43:59.927171   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/ssl/certs/127672.pem --> /usr/share/ca-certificates/127672.pem (1708 bytes)
	I1016 18:43:59.981017   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:44:00.048221   51557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8816/.minikube/certs/12767.pem --> /usr/share/ca-certificates/12767.pem (1338 bytes)
	I1016 18:44:00.111635   51557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:44:00.152149   51557 ssh_runner.go:195] Run: openssl version
	I1016 18:44:00.160745   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/127672.pem && ln -fs /usr/share/ca-certificates/127672.pem /etc/ssl/certs/127672.pem"
	I1016 18:44:00.182087   51557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127672.pem
	I1016 18:44:00.190340   51557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:53 /usr/share/ca-certificates/127672.pem
	I1016 18:44:00.190411   51557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127672.pem
	I1016 18:44:00.200034   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/127672.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:44:00.219488   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:44:00.235039   51557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:44:00.243731   51557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:44:00.243803   51557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:44:00.253561   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:44:00.273377   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12767.pem && ln -fs /usr/share/ca-certificates/12767.pem /etc/ssl/certs/12767.pem"
	I1016 18:44:00.291619   51557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12767.pem
	I1016 18:44:00.298352   51557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:53 /usr/share/ca-certificates/12767.pem
	I1016 18:44:00.298443   51557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12767.pem
	I1016 18:44:00.308045   51557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12767.pem /etc/ssl/certs/51391683.0"
	I1016 18:44:00.325585   51557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:44:00.338672   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:44:00.364677   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:44:00.385675   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:44:00.400524   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:44:00.416528   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:44:00.430404   51557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 18:44:00.439360   51557 kubeadm.go:400] StartCluster: {Name:pause-050003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-050003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:44:00.439523   51557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:44:00.439649   51557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:44:00.487551   51557 cri.go:89] found id: "f9fcc7fcff99c9c9feff79f0d00665cef6acfe8dcb7ca44d012a9367a1f5cb68"
	I1016 18:44:00.487580   51557 cri.go:89] found id: "ae46e480b4faf6eb9510232952087132b353b9d324bce2fee0274bc2b1607112"
	I1016 18:44:00.487587   51557 cri.go:89] found id: "07b011184f53e19705ba685e0bdc155d5dd61f83a23a933e33833c94189a862c"
	I1016 18:44:00.487592   51557 cri.go:89] found id: "138900f082cb91cf15dec293a7219ac9ef1d0562c26a936b116159079c13521b"
	I1016 18:44:00.487596   51557 cri.go:89] found id: "d14ab5accd20dd228f5abd44ad8759e00e8f52bfe7cbbf70e751c88e8a19168a"
	I1016 18:44:00.487612   51557 cri.go:89] found id: "af07ef62c7d7cb7433c0d0a5080931686ac5ff7997a17a4f3faa1c86d84f1fe3"
	I1016 18:44:00.487617   51557 cri.go:89] found id: "d37befefb8cb45666ac08e050ad83458852f70cdc44daac678980be7982ed88e"
	I1016 18:44:00.487620   51557 cri.go:89] found id: "1b2f1e517947eacc22d2130f03c0c2330bf0d0ce34d5994124fc25a5cd9442fc"
	I1016 18:44:00.487624   51557 cri.go:89] found id: "8826fedd55538ba5a818b005995a769c41fd97532ab28f367823d54da6f1ce1a"
	I1016 18:44:00.487632   51557 cri.go:89] found id: "e114ce1dfa21b0e5df66c962c5a4f47663c6d49a358bbd9c96a5c4cb97d28bc3"
	I1016 18:44:00.487636   51557 cri.go:89] found id: "a47769b140ecbbfc798478b8f5f1349bbc5c366196938972d84a79f08bd90d47"
	I1016 18:44:00.487640   51557 cri.go:89] found id: "93a48ed4131a6a79c5441907006747d3ef82a04f382eb26fb6c08e2001f9f2f4"
	I1016 18:44:00.487644   51557 cri.go:89] found id: ""
	I1016 18:44:00.487695   51557 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
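
The excerpt above shows the runner syncing certificates into the guest and then validating them: each CA is hashed with openssl and symlinked into /etc/ssl/certs under its subject hash, every cluster certificate is checked for expiry within the next 24 hours, and finally the kube-system containers are enumerated over CRI. A minimal sketch of the same command sequence, runnable on the node; all paths are taken from the log above:

	# Compute the subject hash OpenSSL uses to index CA certificates.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Expose the CA as "<hash>.0" so OpenSSL's default trust lookup finds it.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# -checkend 86400 exits 0 only if the cert is still valid 24h from now.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	# Enumerate kube-system container IDs in any state, as cri.go does above.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
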
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-050003 -n pause-050003
helpers_test.go:269: (dbg) Run:  kubectl --context pause-050003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.96s)

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.12
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 12.5
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.45
18 TestDownloadOnly/v1.34.1/DeleteAll 0.31
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.35
21 TestBinaryMirror 1.49
22 TestOffline 90.67
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 205.85
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.5
35 TestAddons/parallel/Registry 18.18
36 TestAddons/parallel/RegistryCreds 1.31
38 TestAddons/parallel/InspektorGadget 6.38
39 TestAddons/parallel/MetricsServer 7.41
41 TestAddons/parallel/CSI 43.39
42 TestAddons/parallel/Headlamp 28.81
43 TestAddons/parallel/CloudSpanner 6.95
44 TestAddons/parallel/LocalPath 57.74
45 TestAddons/parallel/NvidiaDevicePlugin 6.76
46 TestAddons/parallel/Yakd 12.19
48 TestAddons/StoppedEnableDisable 81.71
49 TestCertOptions 81.88
50 TestCertExpiration 333.96
52 TestForceSystemdFlag 58.37
53 TestForceSystemdEnv 39.46
55 TestKVMDriverInstallOrUpdate 0.83
59 TestErrorSpam/setup 37.91
60 TestErrorSpam/start 0.33
61 TestErrorSpam/status 0.79
62 TestErrorSpam/pause 1.64
63 TestErrorSpam/unpause 1.83
64 TestErrorSpam/stop 4.68
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 47.71
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.82
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
76 TestFunctional/serial/CacheCmd/cache/add_local 2.16
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 58.83
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.42
87 TestFunctional/serial/LogsFileCmd 1.42
88 TestFunctional/serial/InvalidService 4.52
90 TestFunctional/parallel/ConfigCmd 0.32
91 TestFunctional/parallel/DashboardCmd 16.48
92 TestFunctional/parallel/DryRun 0.27
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.84
98 TestFunctional/parallel/ServiceCmdConnect 19.57
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 30.63
102 TestFunctional/parallel/SSHCmd 0.42
103 TestFunctional/parallel/CpCmd 1.3
104 TestFunctional/parallel/MySQL 23.05
105 TestFunctional/parallel/FileSync 0.2
106 TestFunctional/parallel/CertSync 1.27
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
114 TestFunctional/parallel/License 0.35
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.63
118 TestFunctional/parallel/ImageCommands/ImageListShort 1.46
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
123 TestFunctional/parallel/ImageCommands/Setup 1.98
124 TestFunctional/parallel/ProfileCmd/profile_list 0.38
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
129 TestFunctional/parallel/MountCmd/any-port 21.66
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.61
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.87
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.13
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
146 TestFunctional/parallel/MountCmd/specific-port 1.89
147 TestFunctional/parallel/ServiceCmd/DeployApp 14.18
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.32
149 TestFunctional/parallel/ServiceCmd/List 1.26
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
152 TestFunctional/parallel/ServiceCmd/Format 0.28
153 TestFunctional/parallel/ServiceCmd/URL 0.28
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 198.2
162 TestMultiControlPlane/serial/DeployApp 6.69
163 TestMultiControlPlane/serial/PingHostFromPods 1.16
164 TestMultiControlPlane/serial/AddWorkerNode 44.66
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
167 TestMultiControlPlane/serial/CopyFile 12.91
168 TestMultiControlPlane/serial/StopSecondaryNode 82.91
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
170 TestMultiControlPlane/serial/RestartSecondaryNode 35.95
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 371.79
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.37
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
175 TestMultiControlPlane/serial/StopCluster 257.83
176 TestMultiControlPlane/serial/RestartCluster 115.28
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
178 TestMultiControlPlane/serial/AddSecondaryNode 74.95
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
183 TestJSONOutput/start/Command 84.34
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.77
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.86
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 80.5
215 TestMountStart/serial/StartWithMountFirst 20.4
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 21.69
218 TestMountStart/serial/VerifyMountSecond 0.37
219 TestMountStart/serial/DeleteFirst 0.71
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.24
222 TestMountStart/serial/RestartStopped 19.34
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 97.28
227 TestMultiNode/serial/DeployApp2Nodes 6.16
228 TestMultiNode/serial/PingHostFrom2Pods 0.78
229 TestMultiNode/serial/AddNode 41.32
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.59
232 TestMultiNode/serial/CopyFile 7.14
233 TestMultiNode/serial/StopNode 2.38
234 TestMultiNode/serial/StartAfterStop 39.33
235 TestMultiNode/serial/RestartKeepsNodes 302.73
236 TestMultiNode/serial/DeleteNode 2.81
237 TestMultiNode/serial/StopMultiNode 173.67
238 TestMultiNode/serial/RestartMultiNode 94.62
239 TestMultiNode/serial/ValidateNameConflict 39.18
246 TestScheduledStopUnix 108.56
250 TestRunningBinaryUpgrade 110.67
252 TestKubernetesUpgrade 147.95
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 96.86
264 TestNetworkPlugins/group/false 3.29
265 TestNoKubernetes/serial/StartWithStopK8s 32.82
269 TestNoKubernetes/serial/Start 40.22
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
271 TestNoKubernetes/serial/ProfileList 1.42
272 TestNoKubernetes/serial/Stop 1.34
273 TestNoKubernetes/serial/StartNoArgs 40.19
274 TestStoppedBinaryUpgrade/Setup 2.56
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
284 TestPause/serial/Start 70.86
285 TestStoppedBinaryUpgrade/Upgrade 134.51
287 TestNetworkPlugins/group/auto/Start 95.97
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
289 TestNetworkPlugins/group/kindnet/Start 62.82
290 TestNetworkPlugins/group/calico/Start 94.83
291 TestNetworkPlugins/group/custom-flannel/Start 96.66
292 TestNetworkPlugins/group/auto/KubeletFlags 0.27
293 TestNetworkPlugins/group/auto/NetCatPod 10.25
294 TestNetworkPlugins/group/auto/DNS 0.19
295 TestNetworkPlugins/group/auto/Localhost 0.19
296 TestNetworkPlugins/group/auto/HairPin 0.16
297 TestNetworkPlugins/group/enable-default-cni/Start 86.58
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.13
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
300 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
301 TestNetworkPlugins/group/kindnet/DNS 0.18
302 TestNetworkPlugins/group/kindnet/Localhost 0.16
303 TestNetworkPlugins/group/kindnet/HairPin 0.16
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/flannel/Start 74.94
306 TestNetworkPlugins/group/calico/KubeletFlags 0.24
307 TestNetworkPlugins/group/calico/NetCatPod 10.34
308 TestNetworkPlugins/group/calico/DNS 0.16
309 TestNetworkPlugins/group/calico/Localhost 0.16
310 TestNetworkPlugins/group/calico/HairPin 0.16
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
313 TestNetworkPlugins/group/custom-flannel/DNS 0.17
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
316 TestNetworkPlugins/group/bridge/Start 64.84
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
320 TestStartStop/group/old-k8s-version/serial/FirstStart 65.9
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestStartStop/group/no-preload/serial/FirstStart 78.81
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
328 TestNetworkPlugins/group/flannel/NetCatPod 15.32
329 TestNetworkPlugins/group/flannel/DNS 0.19
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
331 TestNetworkPlugins/group/flannel/Localhost 0.17
332 TestNetworkPlugins/group/flannel/HairPin 0.19
333 TestNetworkPlugins/group/bridge/NetCatPod 10.31
334 TestNetworkPlugins/group/bridge/DNS 0.24
335 TestNetworkPlugins/group/bridge/Localhost 0.14
336 TestNetworkPlugins/group/bridge/HairPin 0.17
338 TestStartStop/group/embed-certs/serial/FirstStart 60.21
339 TestStartStop/group/old-k8s-version/serial/DeployApp 11.38
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.18
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.29
343 TestStartStop/group/old-k8s-version/serial/Stop 79.43
344 TestStartStop/group/no-preload/serial/DeployApp 12.31
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
346 TestStartStop/group/no-preload/serial/Stop 78.94
347 TestStartStop/group/embed-certs/serial/DeployApp 11.27
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
349 TestStartStop/group/embed-certs/serial/Stop 80.94
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
351 TestStartStop/group/old-k8s-version/serial/SecondStart 46.2
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.46
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 84.23
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/no-preload/serial/SecondStart 60.25
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
360 TestStartStop/group/embed-certs/serial/SecondStart 45.13
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
362 TestStartStop/group/old-k8s-version/serial/Pause 3.24
364 TestStartStop/group/newest-cni/serial/FirstStart 53.15
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.18
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.01
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 41.43
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
371 TestStartStop/group/no-preload/serial/Pause 3.31
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
375 TestStartStop/group/newest-cni/serial/Stop 13.85
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
377 TestStartStop/group/embed-certs/serial/Pause 3.06
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
379 TestStartStop/group/newest-cni/serial/SecondStart 35.96
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.89
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
387 TestStartStop/group/newest-cni/serial/Pause 3.95
TestDownloadOnly/v1.28.0/json-events (23.12s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-047197 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-047197 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.118906635s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.12s)
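For reference, the JSON event stream this test consumes can be reproduced by hand. A minimal sketch, assuming jq is available and that minikube's cloudevents carry a top-level "type" and a "data.name" field (the profile name here is arbitrary, not the harness's):

# Emit download-only progress as JSON lines and pull out the step names.
minikube start -o=json --download-only -p download-demo \
  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2 \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'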

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1016 17:44:11.590372   12767 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1016 17:44:11.590479   12767 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
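This check is purely a filesystem assertion: the preload tarball must already sit at the cache path shown in the log. The equivalent manual check (path taken from the log above; adjust if MINIKUBE_HOME points elsewhere):

# Confirm the v1.28.0 CRI-O preload tarball is cached locally.
ls -lh "$HOME/.minikube/cache/preloaded-tarball/" | grep v1.28.0-cri-o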

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-047197
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-047197: exit status 85 (62.956785ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-047197 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-047197 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:43:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 17:43:48.511440   12779 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:43:48.511715   12779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:48.511726   12779 out.go:374] Setting ErrFile to fd 2...
	I1016 17:43:48.511730   12779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:48.511909   12779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	W1016 17:43:48.512029   12779 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21738-8816/.minikube/config/config.json: open /home/jenkins/minikube-integration/21738-8816/.minikube/config/config.json: no such file or directory
	I1016 17:43:48.512503   12779 out.go:368] Setting JSON to true
	I1016 17:43:48.513411   12779 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1566,"bootTime":1760635062,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:43:48.513485   12779 start.go:141] virtualization: kvm guest
	I1016 17:43:48.515573   12779 out.go:99] [download-only-047197] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:43:48.515715   12779 notify.go:220] Checking for updates...
	W1016 17:43:48.515747   12779 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball: no such file or directory
	I1016 17:43:48.517003   12779 out.go:171] MINIKUBE_LOCATION=21738
	I1016 17:43:48.518474   12779 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:43:48.519838   12779 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 17:43:48.521035   12779 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 17:43:48.522282   12779 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1016 17:43:48.524266   12779 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1016 17:43:48.524497   12779 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:43:49.009639   12779 out.go:99] Using the kvm2 driver based on user configuration
	I1016 17:43:49.009674   12779 start.go:305] selected driver: kvm2
	I1016 17:43:49.009680   12779 start.go:925] validating driver "kvm2" against <nil>
	I1016 17:43:49.009982   12779 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:43:49.010103   12779 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 17:43:49.025061   12779 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 17:43:49.025096   12779 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 17:43:49.038187   12779 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 17:43:49.038233   12779 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 17:43:49.038784   12779 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1016 17:43:49.038976   12779 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1016 17:43:49.039002   12779 cni.go:84] Creating CNI manager for ""
	I1016 17:43:49.039045   12779 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 17:43:49.039053   12779 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1016 17:43:49.039093   12779 start.go:349] cluster config:
	{Name:download-only-047197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-047197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:43:49.039303   12779 iso.go:125] acquiring lock: {Name:mke23fa091b5b2529e94c2fba7379020f81892c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:43:49.041053   12779 out.go:99] Downloading VM boot image ...
	I1016 17:43:49.041079   12779 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21738-8816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1016 17:43:58.998819   12779 out.go:99] Starting "download-only-047197" primary control-plane node in "download-only-047197" cluster
	I1016 17:43:58.998840   12779 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 17:43:59.089018   12779 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1016 17:43:59.089057   12779 cache.go:58] Caching tarball of preloaded images
	I1016 17:43:59.089262   12779 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 17:43:59.090921   12779 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1016 17:43:59.090951   12779 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1016 17:43:59.190134   12779 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1016 17:43:59.190283   12779 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-047197 host does not exist
	  To start a cluster, run: "minikube start -p download-only-047197"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
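Note the PASS despite the non-zero exit: a download-only profile never creates a VM, so the test asserts that "minikube logs" fails fast with exit status 85 and the "host does not exist" hint from the stdout above, rather than succeeding. A sketch of the same check:

# Expect a fast failure on a never-started profile (exit status 85 in the run above).
minikube logs -p download-only-047197
echo "exit status: $?"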

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-047197
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (12.5s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-762056 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-762056 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.500557608s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.50s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1016 17:44:24.441716   12767 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1016 17:44:24.441773   12767 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.45s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-762056
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-762056: exit status 85 (450.143322ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-047197 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-047197 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │ 16 Oct 25 17:44 UTC │
	│ delete  │ -p download-only-047197                                                                                                                                                                             │ download-only-047197 │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │ 16 Oct 25 17:44 UTC │
	│ start   │ -o=json --download-only -p download-only-762056 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-762056 │ jenkins │ v1.37.0 │ 16 Oct 25 17:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:44:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 17:44:11.982354   13045 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:44:11.982622   13045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:44:11.982633   13045 out.go:374] Setting ErrFile to fd 2...
	I1016 17:44:11.982638   13045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:44:11.982859   13045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 17:44:11.983346   13045 out.go:368] Setting JSON to true
	I1016 17:44:11.984272   13045 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1590,"bootTime":1760635062,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:44:11.984361   13045 start.go:141] virtualization: kvm guest
	I1016 17:44:11.986346   13045 out.go:99] [download-only-762056] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:44:11.986491   13045 notify.go:220] Checking for updates...
	I1016 17:44:11.987651   13045 out.go:171] MINIKUBE_LOCATION=21738
	I1016 17:44:11.988884   13045 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:44:11.990233   13045 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 17:44:11.991436   13045 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 17:44:11.992637   13045 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1016 17:44:11.994638   13045 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1016 17:44:11.994875   13045 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:44:12.025844   13045 out.go:99] Using the kvm2 driver based on user configuration
	I1016 17:44:12.025881   13045 start.go:305] selected driver: kvm2
	I1016 17:44:12.025888   13045 start.go:925] validating driver "kvm2" against <nil>
	I1016 17:44:12.026244   13045 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:44:12.026359   13045 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 17:44:12.040339   13045 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 17:44:12.040371   13045 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21738-8816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1016 17:44:12.054422   13045 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1016 17:44:12.054469   13045 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 17:44:12.055020   13045 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1016 17:44:12.055222   13045 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1016 17:44:12.055252   13045 cni.go:84] Creating CNI manager for ""
	I1016 17:44:12.055313   13045 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1016 17:44:12.055326   13045 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1016 17:44:12.055378   13045 start.go:349] cluster config:
	{Name:download-only-762056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-762056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:44:12.055494   13045 iso.go:125] acquiring lock: {Name:mke23fa091b5b2529e94c2fba7379020f81892c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:44:12.057329   13045 out.go:99] Starting "download-only-762056" primary control-plane node in "download-only-762056" cluster
	I1016 17:44:12.057349   13045 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:12.181683   13045 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 17:44:12.181734   13045 cache.go:58] Caching tarball of preloaded images
	I1016 17:44:12.181901   13045 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:12.183589   13045 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1016 17:44:12.183613   13045 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1016 17:44:12.280901   13045 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1016 17:44:12.280981   13045 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21738-8816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-762056 host does not exist
	  To start a cluster, run: "minikube start -p download-only-762056"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.45s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.31s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-762056
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.35s)

                                                
                                    
TestBinaryMirror (1.49s)

=== RUN   TestBinaryMirror
I1016 17:44:26.255526   12767 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-778089 --alsologtostderr --binary-mirror http://127.0.0.1:37567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:309: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-778089 --alsologtostderr --binary-mirror http://127.0.0.1:37567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1.181456324s)
helpers_test.go:175: Cleaning up "binary-mirror-778089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-778089
--- PASS: TestBinaryMirror (1.49s)
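The test points --binary-mirror at a local HTTP server seeded from dl.k8s.io. A rough stand-in sketch, with the directory layout and the .sha256 sidecar assumed from the kubectl URL in the binary.go:74 line above (the port is arbitrary; a full start would need kubeadm and kubelet mirrored the same way):

# Mirror dl.k8s.io's release layout locally, then download through it.
mkdir -p mirror/v1.34.1/bin/linux/amd64
(cd mirror/v1.34.1/bin/linux/amd64 &&
  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl &&
  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256)
(cd mirror && python3 -m http.server 37567) &
minikube start --download-only -p mirror-demo --driver=kvm2 \
  --container-runtime=crio --binary-mirror http://127.0.0.1:37567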

                                                
                                    
TestOffline (90.67s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-470108 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-470108 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.326574101s)
helpers_test.go:175: Cleaning up "offline-crio-470108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-470108
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-470108: (1.347795578s)
--- PASS: TestOffline (90.67s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-019580
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-019580: exit status 85 (57.34714ms)

-- stdout --
	* Profile "addons-019580" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-019580"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-019580
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-019580: exit status 85 (55.946468ms)

-- stdout --
	* Profile "addons-019580" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-019580"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (205.85s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-019580 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-019580 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m25.847686543s)
--- PASS: TestAddons/Setup (205.85s)
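Outside the harness, the same cluster shape comes up with any subset of those --addons flags, and "minikube addons list" reports what is enabled. A minimal sketch (profile name and addon subset are illustrative, not the harness invocation):

minikube start -p addons-demo --memory=4096 --driver=kvm2 \
  --container-runtime=crio \
  --addons=registry --addons=metrics-server --addons=ingress
minikube addons list -p addons-demo   # verify which addons came up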

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-019580 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-019580 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.5s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-019580 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-019580 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d04de663-3415-49fc-8de0-6c2bcb2781c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d04de663-3415-49fc-8de0-6c2bcb2781c1] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003898574s
addons_test.go:694: (dbg) Run:  kubectl --context addons-019580 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-019580 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-019580 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.50s)

                                                
                                    
TestAddons/parallel/Registry (18.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.40252ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-dzfmg" [8d164814-7fd0-4b56-a4ed-12771b631303] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004004586s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-f58hj" [d0a773e3-ad59-4a57-89ed-1b4a3eb52390] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003794594s
addons_test.go:392: (dbg) Run:  kubectl --context addons-019580 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-019580 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-019580 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.092148988s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 ip
2025/10/16 17:48:31 [DEBUG] GET http://192.168.39.210:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.18s)
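Beyond the in-cluster "wget --spider" probe, the registry proxy is reachable on the node IP, which is what the DEBUG GET on port 5000 above exercises. A quick sketch using the standard Docker registry API health endpoint:

# Expect an HTTP 200 from the registry's /v2/ version-check endpoint.
curl -i "http://$(minikube -p addons-019580 ip):5000/v2/"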

                                                
                                    
TestAddons/parallel/RegistryCreds (1.31s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.19938ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-019580
addons_test.go:332: (dbg) Run:  kubectl --context addons-019580 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable registry-creds --alsologtostderr -v=1: (1.095678909s)
--- PASS: TestAddons/parallel/RegistryCreds (1.31s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.38s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-q89mj" [d5744fd9-0134-43c6-8f05-f31e6045d33a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004441955s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.38s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.41s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.213308ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-n7m6f" [80bf5f5a-64bc-418a-84ab-e35b334f4a34] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.094525366s
addons_test.go:463: (dbg) Run:  kubectl --context addons-019580 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable metrics-server --alsologtostderr -v=1: (1.233772255s)
--- PASS: TestAddons/parallel/MetricsServer (7.41s)
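Once metrics-server is serving, the same data the test polls is available interactively:

kubectl top pods -n kube-system   # per-pod CPU/memory, as queried by the test
kubectl top nodes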

                                                
                                    
TestAddons/parallel/CSI (43.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1016 17:48:26.221235   12767 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1016 17:48:26.229427   12767 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1016 17:48:26.229458   12767 kapi.go:107] duration metric: took 8.236813ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.247941ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-019580 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-019580 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d70e5e5a-6570-4f04-a119-2d7a8a244b13] Pending
helpers_test.go:352: "task-pv-pod" [d70e5e5a-6570-4f04-a119-2d7a8a244b13] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d70e5e5a-6570-4f04-a119-2d7a8a244b13] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004898979s
addons_test.go:572: (dbg) Run:  kubectl --context addons-019580 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-019580 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-019580 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-019580 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-019580 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-019580 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-019580 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b74cce04-01d7-4acf-90aa-2469ed8d24f4] Pending
helpers_test.go:352: "task-pv-pod-restore" [b74cce04-01d7-4acf-90aa-2469ed8d24f4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b74cce04-01d7-4acf-90aa-2469ed8d24f4] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003940333s
addons_test.go:614: (dbg) Run:  kubectl --context addons-019580 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-019580 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-019580 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.916326557s)
--- PASS: TestAddons/parallel/CSI (43.39s)
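The pvc.yaml the test applies is not reproduced in this log; a minimal equivalent sketch, with the storage class name assumed to be the csi-hostpath-driver addon's default ("csi-hostpath-sc"):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc hpvc -o jsonpath='{.status.phase}'   # the test polls for Bound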

                                                
                                    
TestAddons/parallel/Headlamp (28.81s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-019580 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-57s4p" [1b36926f-4fd4-494d-b565-cd4c1dea58ca] Pending
helpers_test.go:352: "headlamp-6945c6f4d-57s4p" [1b36926f-4fd4-494d-b565-cd4c1dea58ca] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-57s4p" [1b36926f-4fd4-494d-b565-cd4c1dea58ca] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.003030052s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable headlamp --alsologtostderr -v=1: (5.847358836s)
--- PASS: TestAddons/parallel/Headlamp (28.81s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.95s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-bn8zv" [5ff261d6-0020-4764-bcee-ea9fd7a2f4fb] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.09541966s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.95s)

                                                
                                    
TestAddons/parallel/LocalPath (57.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-019580 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-019580 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [59c42bc9-35db-4bb5-b28f-54d485aa0cb6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [59c42bc9-35db-4bb5-b28f-54d485aa0cb6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [59c42bc9-35db-4bb5-b28f-54d485aa0cb6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003133849s
addons_test.go:967: (dbg) Run:  kubectl --context addons-019580 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 ssh "cat /opt/local-path-provisioner/pvc-9fdcc576-35c5-4162-a0f5-167380d6b2ab_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-019580 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-019580 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.944384218s)
--- PASS: TestAddons/parallel/LocalPath (57.74s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.76s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-b4mml" [822e9daf-f119-4305-a6e2-316dc27de6e8] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004301794s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                    
TestAddons/parallel/Yakd (12.19s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-llr6p" [67a1ae13-8d66-4a33-b37e-d67ecc5c1578] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004448609s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019580 addons disable yakd --alsologtostderr -v=1: (6.188492885s)
--- PASS: TestAddons/parallel/Yakd (12.19s)

TestAddons/StoppedEnableDisable (81.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-019580
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-019580: (1m21.443320486s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-019580
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-019580
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-019580
--- PASS: TestAddons/StoppedEnableDisable (81.71s)

TestCertOptions (81.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-605457 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-605457 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.553793561s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-605457 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-605457 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-605457 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-605457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-605457
--- PASS: TestCertOptions (81.88s)

TestCertExpiration (333.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-854144 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-854144 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.813556886s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-854144 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-854144 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.082794266s)
helpers_test.go:175: Cleaning up "cert-expiration-854144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-854144
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-854144: (1.065637152s)
--- PASS: TestCertExpiration (333.96s)
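The sequence above is the whole point of TestCertExpiration: the cluster is first started with --cert-expiration=3m, left to run past that window, then restarted with --cert-expiration=8760h, and the second start only succeeds if minikube regenerates the expired certificates. For readers who want to perform the same expiry check by hand, here is a minimal Go sketch that reads a certificate's NotAfter field; the file path is the one shown in the TestCertOptions output above, and the snippet is illustrative rather than minikube's own code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the TestCertOptions output above; adjust as needed.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if remaining := time.Until(cert.NotAfter); remaining <= 0 {
		fmt.Println("certificate has expired; the next start should regenerate it")
	} else {
		fmt.Printf("certificate valid for another %s\n", remaining)
	}
}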

                                                
                                    
TestForceSystemdFlag (58.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-782579 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-782579 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.12885649s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-782579 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-782579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-782579
--- PASS: TestForceSystemdFlag (58.37s)

TestForceSystemdEnv (39.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-527495 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-527495 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.527290539s)
helpers_test.go:175: Cleaning up "force-systemd-env-527495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-527495
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-527495: (1.935660204s)
--- PASS: TestForceSystemdEnv (39.46s)

TestKVMDriverInstallOrUpdate (0.83s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1016 18:40:45.625732   12767 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:40:45.625916   12767 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate30318574/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1016 18:40:45.660837   12767 install.go:163] /tmp/TestKVMDriverInstallOrUpdate30318574/001/docker-machine-driver-kvm2 version is 1.1.1
W1016 18:40:45.660882   12767 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1016 18:40:45.661043   12767 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1016 18:40:45.661093   12767 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate30318574/001/docker-machine-driver-kvm2
I1016 18:40:46.310061   12767 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate30318574/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1016 18:40:46.325447   12767 install.go:163] /tmp/TestKVMDriverInstallOrUpdate30318574/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.83s)
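The install.go lines above trace the complete upgrade path: the stale driver reports version 1.1.1, the wanted version is 1.37.0, so the binary is re-downloaded from the GitHub release page (with a companion .sha256 checksum URL) and validated again. Below is a rough Go sketch of that check-then-upgrade decision; the installedVersion helper and the /tmp path are hypothetical stand-ins, not minikube's actual internals.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installedVersion shells out to the driver binary and returns the last
// whitespace-separated field of its version output (hypothetical parsing;
// the real validation lives in minikube's install.go).
func installedVersion(path string) (string, error) {
	out, err := exec.Command(path, "version").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		return "", fmt.Errorf("empty version output")
	}
	return fields[len(fields)-1], nil
}

func main() {
	const want = "1.37.0"
	path := "/tmp/docker-machine-driver-kvm2" // hypothetical location for the sketch
	got, err := installedVersion(path)
	if err != nil || got != want {
		// A real implementation would now download the release binary and
		// verify it against the published .sha256 checksum file, as the
		// download.go line above does.
		fmt.Printf("driver reports %q, want %q: upgrade needed\n", got, want)
		return
	}
	fmt.Println("driver is up to date")
}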

                                                
                                    
TestErrorSpam/setup (37.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-571474 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-571474 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1016 17:52:54.283338   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:54.292605   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:54.305336   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:54.326786   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:54.368311   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:54.449728   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:54.611265   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:54.932960   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:55.575114   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:52:56.856800   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-571474 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-571474 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.905786147s)
--- PASS: TestErrorSpam/setup (37.91s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 start --dry-run
E1016 17:52:59.418499   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (4.68s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 stop
E1016 17:53:04.540258   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 stop: (1.946194217s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 stop: (1.345236432s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-571474 --log_dir /tmp/nospam-571474 stop: (1.389524453s)
--- PASS: TestErrorSpam/stop (4.68s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21738-8816/.minikube/files/etc/test/nested/copy/12767/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032307 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1016 17:53:14.781990   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:53:35.263627   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-032307 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.714376173s)
--- PASS: TestFunctional/serial/StartWithProxy (47.71s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.82s)

=== RUN   TestFunctional/serial/SoftStart
I1016 17:53:56.747227   12767 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032307 --alsologtostderr -v=8
E1016 17:54:16.225671   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-032307 --alsologtostderr -v=8: (29.814507976s)
functional_test.go:678: soft start took 29.815091513s for "functional-032307" cluster.
I1016 17:54:26.562097   12767 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.82s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-032307 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 cache add registry.k8s.io/pause:3.1: (1.062746095s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 cache add registry.k8s.io/pause:3.3: (1.15443791s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 cache add registry.k8s.io/pause:latest: (1.120581885s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-032307 /tmp/TestFunctionalserialCacheCmdcacheadd_local3279403698/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cache add minikube-local-cache-test:functional-032307
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 cache add minikube-local-cache-test:functional-032307: (1.815496554s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cache delete minikube-local-cache-test:functional-032307
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-032307
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (222.326277ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
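This passing sequence doubles as a recipe for exercising the image cache by hand: remove the image inside the node, confirm crictl can no longer inspect it, run cache reload, and confirm the image is back. The following Go sketch drives the same four commands via os/exec; it assumes the functional-032307 profile from this run and is a reproduction aid, not the test's own code.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	const profile = "functional-032307"
	const image = "registry.k8s.io/pause:latest"

	// Remove the cached image from inside the node.
	_ = run("minikube", "-p", profile, "ssh", "sudo", "crictl", "rmi", image)
	// inspecti is now expected to fail (exit status 1 in the log above).
	if err := run("minikube", "-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err == nil {
		fmt.Println("unexpected: image still present")
	}
	// Reload everything in minikube's cache back onto the node...
	_ = run("minikube", "-p", profile, "cache", "reload")
	// ...after which the image should be inspectable again.
	if err := run("minikube", "-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err != nil {
		fmt.Println("reload did not restore the image:", err)
	}
}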

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 kubectl -- --context functional-032307 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-032307 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (58.83s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032307 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-032307 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.831582548s)
functional_test.go:776: restart took 58.831716646s for "functional-032307" cluster.
I1016 17:55:33.364482   12767 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (58.83s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-032307 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 logs: (1.420431109s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 logs --file /tmp/TestFunctionalserialLogsFileCmd685984443/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 logs --file /tmp/TestFunctionalserialLogsFileCmd685984443/001/logs.txt: (1.413468039s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (4.52s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-032307 apply -f testdata/invalidsvc.yaml
E1016 17:55:38.148303   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-032307
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-032307: exit status 115 (285.314005ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.31:31793 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-032307 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-032307 delete -f testdata/invalidsvc.yaml: (1.050391061s)
--- PASS: TestFunctional/serial/InvalidService (4.52s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 config get cpus: exit status 14 (51.989126ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 config get cpus: exit status 14 (52.532213ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (16.48s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-032307 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-032307 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 21143: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.48s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032307 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-032307 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (137.164475ms)

-- stdout --
	* [functional-032307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1016 17:55:42.596276   19899 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:55:42.596576   19899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:55:42.596589   19899 out.go:374] Setting ErrFile to fd 2...
	I1016 17:55:42.596596   19899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:55:42.596869   19899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 17:55:42.597465   19899 out.go:368] Setting JSON to false
	I1016 17:55:42.598722   19899 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2281,"bootTime":1760635062,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:55:42.598836   19899 start.go:141] virtualization: kvm guest
	I1016 17:55:42.601264   19899 out.go:179] * [functional-032307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:55:42.602609   19899 notify.go:220] Checking for updates...
	I1016 17:55:42.602668   19899 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:55:42.603952   19899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:55:42.605629   19899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 17:55:42.606970   19899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 17:55:42.608289   19899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:55:42.609458   19899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:55:42.611095   19899 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:55:42.611716   19899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:55:42.611814   19899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:55:42.625818   19899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I1016 17:55:42.626403   19899 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:55:42.627036   19899 main.go:141] libmachine: Using API Version  1
	I1016 17:55:42.627060   19899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:55:42.627409   19899 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:55:42.627576   19899 main.go:141] libmachine: (functional-032307) Calling .DriverName
	I1016 17:55:42.627845   19899 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:55:42.628266   19899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:55:42.628312   19899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:55:42.642054   19899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I1016 17:55:42.642456   19899 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:55:42.642826   19899 main.go:141] libmachine: Using API Version  1
	I1016 17:55:42.642848   19899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:55:42.643236   19899 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:55:42.643419   19899 main.go:141] libmachine: (functional-032307) Calling .DriverName
	I1016 17:55:42.674359   19899 out.go:179] * Using the kvm2 driver based on existing profile
	I1016 17:55:42.675601   19899 start.go:305] selected driver: kvm2
	I1016 17:55:42.675612   19899 start.go:925] validating driver "kvm2" against &{Name:functional-032307 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-032307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:55:42.675707   19899 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:55:42.677680   19899 out.go:203] 
	W1016 17:55:42.678939   19899 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1016 17:55:42.680001   19899 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032307 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032307 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-032307 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (137.182666ms)

-- stdout --
	* [functional-032307] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I1016 17:55:44.100570   20287 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:55:44.100676   20287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:55:44.100680   20287 out.go:374] Setting ErrFile to fd 2...
	I1016 17:55:44.100684   20287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:55:44.100969   20287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 17:55:44.101419   20287 out.go:368] Setting JSON to false
	I1016 17:55:44.102366   20287 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2282,"bootTime":1760635062,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:55:44.102451   20287 start.go:141] virtualization: kvm guest
	I1016 17:55:44.104221   20287 out.go:179] * [functional-032307] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1016 17:55:44.105546   20287 notify.go:220] Checking for updates...
	I1016 17:55:44.105559   20287 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:55:44.106767   20287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:55:44.108071   20287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 17:55:44.109204   20287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 17:55:44.110242   20287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:55:44.111177   20287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:55:44.112856   20287 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:55:44.113452   20287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:55:44.113532   20287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:55:44.128517   20287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I1016 17:55:44.129063   20287 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:55:44.129685   20287 main.go:141] libmachine: Using API Version  1
	I1016 17:55:44.129706   20287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:55:44.130071   20287 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:55:44.130246   20287 main.go:141] libmachine: (functional-032307) Calling .DriverName
	I1016 17:55:44.130490   20287 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:55:44.130942   20287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 17:55:44.130988   20287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 17:55:44.144436   20287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I1016 17:55:44.144945   20287 main.go:141] libmachine: () Calling .GetVersion
	I1016 17:55:44.145468   20287 main.go:141] libmachine: Using API Version  1
	I1016 17:55:44.145490   20287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 17:55:44.145845   20287 main.go:141] libmachine: () Calling .GetMachineName
	I1016 17:55:44.146015   20287 main.go:141] libmachine: (functional-032307) Calling .DriverName
	I1016 17:55:44.176183   20287 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1016 17:55:44.177426   20287 start.go:305] selected driver: kvm2
	I1016 17:55:44.177441   20287 start.go:925] validating driver "kvm2" against &{Name:functional-032307 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-032307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:55:44.177580   20287 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:55:44.180027   20287 out.go:203] 
	W1016 17:55:44.181173   20287 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1016 17:55:44.182504   20287 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.84s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)

TestFunctional/parallel/ServiceCmdConnect (19.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-032307 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-032307 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-zk59m" [9b9cf276-303c-46d2-83b4-199f2e0ae4b3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-zk59m" [9b9cf276-303c-46d2-83b4-199f2e0ae4b3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.00340491s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.31:31766
functional_test.go:1680: http://192.168.39.31:31766: success! body:
Request served by hello-node-connect-7d85dfc575-zk59m

HTTP/1.1 GET /

Host: 192.168.39.31:31766
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.57s)
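The test finishes by requesting the NodePort URL that `minikube service hello-node-connect --url` printed and checking the echo-server body shown above. A small Go sketch of that final step follows; the URL is copied from this log for illustration, and a real harness would capture it from the command's output rather than hard-coding it.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.31:31766" // endpoint reported in the log above
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s\n", body)
			return
		}
		time.Sleep(2 * time.Second) // the service may need a moment after expose
	}
	fmt.Println("service did not become reachable before the deadline")
}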

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (30.63s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [706be2bf-77f1-440d-89f4-60ac4ed6e5da] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003341855s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-032307 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-032307 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-032307 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-032307 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b429b0f1-67e9-4107-a5be-2bfe0609d0b9] Pending
helpers_test.go:352: "sp-pod" [b429b0f1-67e9-4107-a5be-2bfe0609d0b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b429b0f1-67e9-4107-a5be-2bfe0609d0b9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007960191s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-032307 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-032307 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-032307 delete -f testdata/storage-provisioner/pod.yaml: (1.70895274s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-032307 apply -f testdata/storage-provisioner/pod.yaml
I1016 17:56:18.731761   12767 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8d5b5bd1-8c1e-4e88-b8b9-b5fdd89c0262] Pending
helpers_test.go:352: "sp-pod" [8d5b5bd1-8c1e-4e88-b8b9-b5fdd89c0262] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/10/16 17:56:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [8d5b5bd1-8c1e-4e88-b8b9-b5fdd89c0262] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003741096s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-032307 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.63s)
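
What this test actually exercises is persistence: it touches /tmp/mount/foo in the first sp-pod, deletes the pod, recreates it against the same PVC, and confirms the file survives via `ls /tmp/mount`. The waiting that helpers_test.go:352 performs can be sketched as a kubectl poll loop; the context name and label selector below are copied from the log, everything else is an illustrative stand-in for the framework's helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	args := []string{"--context", "functional-032307", "get", "pods",
		"-l", "test=storage-provisioner", "-o", "jsonpath={.items[*].status.phase}"}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", args...).Output()
		// Assumes a single matching pod; multiple pods would yield
		// space-separated phases in the jsonpath output.
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}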

x
+
TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)
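
Under the hood, the sshutil.go lines visible in the image-command stderr later in this report show minikube building an ssh client from the machine's IP, port 22, and the per-profile id_rsa key. A minimal equivalent sketch using golang.org/x/crypto/ssh; the address, user, and key path are placeholders copied from this log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path, copied from the sshutil.go log lines.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21738-8816/.minikube/machines/functional-032307/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.31:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("echo hello") // same command the test runs
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}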

x
+
TestFunctional/parallel/CpCmd (1.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh -n functional-032307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cp functional-032307:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2683799494/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh -n functional-032307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh -n functional-032307 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
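
The cp checks above are a round trip: copy a file into the node, read it back over ssh, and compare. A sketch of the same round trip driven through os/exec around the minikube commands shown in the log (profile name and paths copied from the log; the comparison logic is an illustration, not the helper's actual code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local := "testdata/cp-test.txt"
	remote := "/home/docker/cp-test.txt"
	// Copy the file into the node, as `minikube cp` does in the log.
	if err := exec.Command("minikube", "-p", "functional-032307", "cp", local, remote).Run(); err != nil {
		panic(err)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}
	// Read it back over ssh, mirroring `ssh -n functional-032307 "sudo cat ..."`.
	got, err := exec.Command("minikube", "-p", "functional-032307",
		"ssh", "-n", "functional-032307", "sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("contents match")
	} else {
		fmt.Println("contents differ")
	}
}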

x
+
TestFunctional/parallel/MySQL (23.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-032307 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-5cc6f" [1877a6f9-792c-40e2-801a-de7f41c2aa1c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-5cc6f" [1877a6f9-792c-40e2-801a-de7f41c2aa1c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.005295849s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-032307 exec mysql-5bb876957f-5cc6f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-032307 exec mysql-5bb876957f-5cc6f -- mysql -ppassword -e "show databases;": exit status 1 (136.921418ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1016 17:56:02.074527   12767 retry.go:31] will retry after 1.396074057s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-032307 exec mysql-5bb876957f-5cc6f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-032307 exec mysql-5bb876957f-5cc6f -- mysql -ppassword -e "show databases;": exit status 1 (176.777628ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1016 17:56:03.647641   12767 retry.go:31] will retry after 975.856688ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-032307 exec mysql-5bb876957f-5cc6f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.05s)
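
The two retry.go:31 lines above are the point of this test's tolerance: the pod reports Running before mysqld inside it has finished initializing, so the first exec attempts fail with ERROR 2002 and the harness backs off and retries until the socket comes up. The pattern, sketched with a fixed exponential backoff (the real framework computes jittered delays; the command and pod name are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-032307", "exec", "mysql-5bb876957f-5cc6f", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	backoff := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // double the delay each round
	}
	fmt.Println("giving up")
}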

x
+
TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12767/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo cat /etc/test/nested/copy/12767/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

x
+
TestFunctional/parallel/CertSync (1.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12767.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo cat /etc/ssl/certs/12767.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12767.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo cat /usr/share/ca-certificates/12767.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/127672.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo cat /etc/ssl/certs/127672.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/127672.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo cat /usr/share/ca-certificates/127672.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
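
The 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash filenames for the synced CA certificates; the test only cats each path to prove the sync happened. Going one step further and confirming a synced file actually parses as a certificate could look like the sketch below (path copied from the log as a placeholder, error handling kept blunt):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/ssl/certs/12767.pem") // placeholder path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("parsed certificate, subject: %s\n", cert.Subject)
}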

x
+
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-032307 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
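
The --template argument above is a Go template that ranges over the node's metadata.labels map and prints each key. The same construct, runnable standalone against a stand-in label map (the labels here are invented for illustration):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-032307",
		"kubernetes.io/os":       "linux",
	}
	// text/template iterates maps in sorted key order, so output is stable.
	tmpl := template.Must(template.New("labels").Parse(
		"'{{range $k, $v := .}}{{$k}} {{end}}'\n"))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}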

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh "sudo systemctl is-active docker": exit status 1 (217.493387ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh "sudo systemctl is-active containerd": exit status 1 (217.92003ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
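
For context on the non-zero exits above: `systemctl is-active` prints the unit state and exits 0 only for "active"; "inactive" conventionally exits 3, which `minikube ssh` surfaces as "Process exited with status 3". So a non-zero exit paired with "inactive" on stdout is exactly what a crio-only node should produce for docker and containerd, and the test treats it as a pass. A sketch of reading that exit code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "docker").Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: stdout still carries the state ("inactive", "failed", ...).
		fmt.Printf("state=%s exit=%d\n", state, exitErr.ExitCode())
		return
	}
	if err != nil {
		panic(err) // command did not run at all
	}
	fmt.Printf("state=%s exit=0\n", state)
}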

x
+
TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.35s)

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

x
+
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

x
+
TestFunctional/parallel/Version/components (0.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 image ls --format short --alsologtostderr: (1.455834467s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032307 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-032307
localhost/kicbase/echo-server:functional-032307
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032307 image ls --format short --alsologtostderr:
I1016 17:56:08.684525   21514 out.go:360] Setting OutFile to fd 1 ...
I1016 17:56:08.684954   21514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:08.684969   21514 out.go:374] Setting ErrFile to fd 2...
I1016 17:56:08.684976   21514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:08.685469   21514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
I1016 17:56:08.686460   21514 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:08.686603   21514 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:08.687017   21514 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:08.687068   21514 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:08.701369   21514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
I1016 17:56:08.702083   21514 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:08.702701   21514 main.go:141] libmachine: Using API Version  1
I1016 17:56:08.702726   21514 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:08.703211   21514 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:08.703434   21514 main.go:141] libmachine: (functional-032307) Calling .GetState
I1016 17:56:08.705725   21514 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:08.705772   21514 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:08.720093   21514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
I1016 17:56:08.720715   21514 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:08.721391   21514 main.go:141] libmachine: Using API Version  1
I1016 17:56:08.721418   21514 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:08.721828   21514 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:08.722049   21514 main.go:141] libmachine: (functional-032307) Calling .DriverName
I1016 17:56:08.722337   21514 ssh_runner.go:195] Run: systemctl --version
I1016 17:56:08.722374   21514 main.go:141] libmachine: (functional-032307) Calling .GetSSHHostname
I1016 17:56:08.725439   21514 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:08.725985   21514 main.go:141] libmachine: (functional-032307) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:c3:33", ip: ""} in network mk-functional-032307: {Iface:virbr1 ExpiryTime:2025-10-16 18:53:24 +0000 UTC Type:0 Mac:52:54:00:27:c3:33 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-032307 Clientid:01:52:54:00:27:c3:33}
I1016 17:56:08.726009   21514 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined IP address 192.168.39.31 and MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:08.726253   21514 main.go:141] libmachine: (functional-032307) Calling .GetSSHPort
I1016 17:56:08.726454   21514 main.go:141] libmachine: (functional-032307) Calling .GetSSHKeyPath
I1016 17:56:08.726618   21514 main.go:141] libmachine: (functional-032307) Calling .GetSSHUsername
I1016 17:56:08.726786   21514 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/functional-032307/id_rsa Username:docker}
I1016 17:56:08.822983   21514 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 17:56:10.090098   21514 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.267084972s)
I1016 17:56:10.090419   21514 main.go:141] libmachine: Making call to close driver server
I1016 17:56:10.090432   21514 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:10.090764   21514 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:10.090783   21514 main.go:141] libmachine: Making call to close connection to plugin binary
I1016 17:56:10.090791   21514 main.go:141] libmachine: Making call to close driver server
I1016 17:56:10.090798   21514 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:10.090811   21514 main.go:141] libmachine: (functional-032307) DBG | Closing plugin on server side
I1016 17:56:10.091030   21514 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:10.091053   21514 main.go:141] libmachine: (functional-032307) DBG | Closing plugin on server side
I1016 17:56:10.091063   21514 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.46s)
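
The stderr above shows what `image ls` does on a KVM profile: dial the machine driver plugin over local RPC, open an ssh session to the node, and run `sudo crictl images --output json` there; the second-plus of runtime is dominated by that crictl call. A sketch of decoding that JSON shape into tags, one per line as in the short format (the field names are inferred from CRI conventions and the JSON stdout quoted later in this report, so treat the struct as an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the per-image fields seen in the crictl JSON output.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type crictlImages struct {
	Images []crictlImage `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // one tag per line, like `image ls --format short`
		}
	}
}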

x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032307 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ localhost/minikube-local-cache-test     │ functional-032307  │ a629c8e3d4960 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-032307  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-032307  │ 6f9ef2a98aa8e │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032307 image ls --format table --alsologtostderr:
I1016 17:56:14.306478   21681 out.go:360] Setting OutFile to fd 1 ...
I1016 17:56:14.306762   21681 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:14.306775   21681 out.go:374] Setting ErrFile to fd 2...
I1016 17:56:14.306783   21681 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:14.307101   21681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
I1016 17:56:14.307933   21681 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:14.308076   21681 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:14.308634   21681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:14.308700   21681 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:14.323384   21681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
I1016 17:56:14.324024   21681 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:14.324652   21681 main.go:141] libmachine: Using API Version  1
I1016 17:56:14.324694   21681 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:14.325143   21681 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:14.325338   21681 main.go:141] libmachine: (functional-032307) Calling .GetState
I1016 17:56:14.327594   21681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:14.327644   21681 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:14.341330   21681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34417
I1016 17:56:14.341871   21681 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:14.342411   21681 main.go:141] libmachine: Using API Version  1
I1016 17:56:14.342442   21681 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:14.342801   21681 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:14.342996   21681 main.go:141] libmachine: (functional-032307) Calling .DriverName
I1016 17:56:14.343214   21681 ssh_runner.go:195] Run: systemctl --version
I1016 17:56:14.343241   21681 main.go:141] libmachine: (functional-032307) Calling .GetSSHHostname
I1016 17:56:14.346323   21681 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:14.346748   21681 main.go:141] libmachine: (functional-032307) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:c3:33", ip: ""} in network mk-functional-032307: {Iface:virbr1 ExpiryTime:2025-10-16 18:53:24 +0000 UTC Type:0 Mac:52:54:00:27:c3:33 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-032307 Clientid:01:52:54:00:27:c3:33}
I1016 17:56:14.346776   21681 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined IP address 192.168.39.31 and MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:14.346952   21681 main.go:141] libmachine: (functional-032307) Calling .GetSSHPort
I1016 17:56:14.347109   21681 main.go:141] libmachine: (functional-032307) Calling .GetSSHKeyPath
I1016 17:56:14.347256   21681 main.go:141] libmachine: (functional-032307) Calling .GetSSHUsername
I1016 17:56:14.347416   21681 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/functional-032307/id_rsa Username:docker}
I1016 17:56:14.437476   21681 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 17:56:14.525550   21681 main.go:141] libmachine: Making call to close driver server
I1016 17:56:14.525573   21681 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:14.525855   21681 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:14.525870   21681 main.go:141] libmachine: Making call to close connection to plugin binary
I1016 17:56:14.525878   21681 main.go:141] libmachine: Making call to close driver server
I1016 17:56:14.525885   21681 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:14.526085   21681 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:14.526112   21681 main.go:141] libmachine: Making call to close connection to plugin binary
I1016 17:56:14.526090   21681 main.go:141] libmachine: (functional-032307) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032307 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e91
18e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-032307"],"size":"4943877"},{"id":"83b7220bd1b43334aa7fa6bcc9bdf936e83bb540376dc67d60ce9ff0af08d972","repoDigests":["docker.io/
library/ac58573d009bdffb892abc899cc686e7535baa2f45dc58dee3ee2a6aee85add6-tmp@sha256:2384ddc47766a70a709b17dda6800945c6d8694e181d6237a2ca804bf500f91a"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha
256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c0202
89c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a629c8e3d4960f3444cc1af5f07c50c0b19a2610fb84feaf8d1d1ff809a2dbad","repoDigests":["localhost/minikube-local-cache-test@sha256:1c6be5903b0502c06d92a71f7827af161bcf32c0833992ed9cdb6496c3c44ce3"],"repoTags":["localhost/minikube-local-cache-test:functional-032307"],"size":"3330"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["r
egistry.k8s.io/pause:latest"],"size":"247077"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s
.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6f9ef2a98aa8ee9e388b05405641a690404f53a4509536e1a239d3685ccb4cbd","repoDigests":["localhost/my-image@sha256:998e434392db58fe6180547b97a8b22a2bcd8b5f0c50b24c20825a8995e560d2"],"repoTags":["localhost/my-image:functional-032307"],"size":"1468599"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032307 image ls --format json --alsologtostderr:
I1016 17:56:14.026855   21658 out.go:360] Setting OutFile to fd 1 ...
I1016 17:56:14.027148   21658 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:14.027160   21658 out.go:374] Setting ErrFile to fd 2...
I1016 17:56:14.027167   21658 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:14.027355   21658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
I1016 17:56:14.027956   21658 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:14.028065   21658 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:14.028497   21658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:14.028570   21658 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:14.042704   21658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
I1016 17:56:14.043288   21658 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:14.043881   21658 main.go:141] libmachine: Using API Version  1
I1016 17:56:14.043903   21658 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:14.044274   21658 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:14.044440   21658 main.go:141] libmachine: (functional-032307) Calling .GetState
I1016 17:56:14.046744   21658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:14.046796   21658 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:14.061335   21658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
I1016 17:56:14.061912   21658 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:14.062572   21658 main.go:141] libmachine: Using API Version  1
I1016 17:56:14.062608   21658 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:14.062967   21658 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:14.063219   21658 main.go:141] libmachine: (functional-032307) Calling .DriverName
I1016 17:56:14.063442   21658 ssh_runner.go:195] Run: systemctl --version
I1016 17:56:14.063469   21658 main.go:141] libmachine: (functional-032307) Calling .GetSSHHostname
I1016 17:56:14.067194   21658 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:14.067737   21658 main.go:141] libmachine: (functional-032307) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:c3:33", ip: ""} in network mk-functional-032307: {Iface:virbr1 ExpiryTime:2025-10-16 18:53:24 +0000 UTC Type:0 Mac:52:54:00:27:c3:33 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-032307 Clientid:01:52:54:00:27:c3:33}
I1016 17:56:14.067776   21658 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined IP address 192.168.39.31 and MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:14.067982   21658 main.go:141] libmachine: (functional-032307) Calling .GetSSHPort
I1016 17:56:14.068209   21658 main.go:141] libmachine: (functional-032307) Calling .GetSSHKeyPath
I1016 17:56:14.068346   21658 main.go:141] libmachine: (functional-032307) Calling .GetSSHUsername
I1016 17:56:14.068506   21658 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/functional-032307/id_rsa Username:docker}
I1016 17:56:14.164995   21658 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 17:56:14.247135   21658 main.go:141] libmachine: Making call to close driver server
I1016 17:56:14.247146   21658 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:14.247414   21658 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:14.247431   21658 main.go:141] libmachine: Making call to close connection to plugin binary
I1016 17:56:14.247440   21658 main.go:141] libmachine: Making call to close driver server
I1016 17:56:14.247447   21658 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:14.247788   21658 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:14.247810   21658 main.go:141] libmachine: Making call to close connection to plugin binary
I1016 17:56:14.247788   21658 main.go:141] libmachine: (functional-032307) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032307 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a629c8e3d4960f3444cc1af5f07c50c0b19a2610fb84feaf8d1d1ff809a2dbad
repoDigests:
- localhost/minikube-local-cache-test@sha256:1c6be5903b0502c06d92a71f7827af161bcf32c0833992ed9cdb6496c3c44ce3
repoTags:
- localhost/minikube-local-cache-test:functional-032307
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-032307
size: "4943877"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032307 image ls --format yaml --alsologtostderr:
I1016 17:56:10.144887   21539 out.go:360] Setting OutFile to fd 1 ...
I1016 17:56:10.145141   21539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:10.145154   21539 out.go:374] Setting ErrFile to fd 2...
I1016 17:56:10.145159   21539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:10.145341   21539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
I1016 17:56:10.145887   21539 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:10.145973   21539 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:10.146350   21539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:10.146421   21539 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:10.161438   21539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
I1016 17:56:10.161991   21539 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:10.162520   21539 main.go:141] libmachine: Using API Version  1
I1016 17:56:10.162546   21539 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:10.163048   21539 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:10.163315   21539 main.go:141] libmachine: (functional-032307) Calling .GetState
I1016 17:56:10.165844   21539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:10.165892   21539 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:10.180193   21539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
I1016 17:56:10.180667   21539 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:10.181188   21539 main.go:141] libmachine: Using API Version  1
I1016 17:56:10.181212   21539 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:10.181613   21539 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:10.181830   21539 main.go:141] libmachine: (functional-032307) Calling .DriverName
I1016 17:56:10.182051   21539 ssh_runner.go:195] Run: systemctl --version
I1016 17:56:10.182073   21539 main.go:141] libmachine: (functional-032307) Calling .GetSSHHostname
I1016 17:56:10.185465   21539 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:10.185983   21539 main.go:141] libmachine: (functional-032307) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:c3:33", ip: ""} in network mk-functional-032307: {Iface:virbr1 ExpiryTime:2025-10-16 18:53:24 +0000 UTC Type:0 Mac:52:54:00:27:c3:33 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-032307 Clientid:01:52:54:00:27:c3:33}
I1016 17:56:10.186014   21539 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined IP address 192.168.39.31 and MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:10.186202   21539 main.go:141] libmachine: (functional-032307) Calling .GetSSHPort
I1016 17:56:10.186402   21539 main.go:141] libmachine: (functional-032307) Calling .GetSSHKeyPath
I1016 17:56:10.186596   21539 main.go:141] libmachine: (functional-032307) Calling .GetSSHUsername
I1016 17:56:10.186758   21539 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/functional-032307/id_rsa Username:docker}
I1016 17:56:10.272574   21539 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 17:56:10.316381   21539 main.go:141] libmachine: Making call to close driver server
I1016 17:56:10.316395   21539 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:10.316710   21539 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:10.316728   21539 main.go:141] libmachine: Making call to close connection to plugin binary
I1016 17:56:10.316738   21539 main.go:141] libmachine: Making call to close driver server
I1016 17:56:10.316745   21539 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:10.316980   21539 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:10.316994   21539 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh pgrep buildkitd: exit status 1 (198.708663ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image build -t localhost/my-image:functional-032307 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 image build -t localhost/my-image:functional-032307 testdata/build --alsologtostderr: (3.18188737s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032307 image build -t localhost/my-image:functional-032307 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 83b7220bd1b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-032307
--> 6f9ef2a98aa
Successfully tagged localhost/my-image:functional-032307
6f9ef2a98aa8ee9e388b05405641a690404f53a4509536e1a239d3685ccb4cbd
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032307 image build -t localhost/my-image:functional-032307 testdata/build --alsologtostderr:
I1016 17:56:10.569438   21592 out.go:360] Setting OutFile to fd 1 ...
I1016 17:56:10.569731   21592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:10.569741   21592 out.go:374] Setting ErrFile to fd 2...
I1016 17:56:10.569745   21592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:56:10.569904   21592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
I1016 17:56:10.570448   21592 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:10.571052   21592 config.go:182] Loaded profile config "functional-032307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:56:10.571397   21592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:10.571431   21592 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:10.585198   21592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
I1016 17:56:10.585786   21592 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:10.586351   21592 main.go:141] libmachine: Using API Version  1
I1016 17:56:10.586380   21592 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:10.586716   21592 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:10.586929   21592 main.go:141] libmachine: (functional-032307) Calling .GetState
I1016 17:56:10.588912   21592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1016 17:56:10.588953   21592 main.go:141] libmachine: Launching plugin server for driver kvm2
I1016 17:56:10.602647   21592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
I1016 17:56:10.603196   21592 main.go:141] libmachine: () Calling .GetVersion
I1016 17:56:10.603827   21592 main.go:141] libmachine: Using API Version  1
I1016 17:56:10.603859   21592 main.go:141] libmachine: () Calling .SetConfigRaw
I1016 17:56:10.604220   21592 main.go:141] libmachine: () Calling .GetMachineName
I1016 17:56:10.604392   21592 main.go:141] libmachine: (functional-032307) Calling .DriverName
I1016 17:56:10.604606   21592 ssh_runner.go:195] Run: systemctl --version
I1016 17:56:10.604633   21592 main.go:141] libmachine: (functional-032307) Calling .GetSSHHostname
I1016 17:56:10.607868   21592 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:10.608326   21592 main.go:141] libmachine: (functional-032307) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:c3:33", ip: ""} in network mk-functional-032307: {Iface:virbr1 ExpiryTime:2025-10-16 18:53:24 +0000 UTC Type:0 Mac:52:54:00:27:c3:33 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-032307 Clientid:01:52:54:00:27:c3:33}
I1016 17:56:10.608353   21592 main.go:141] libmachine: (functional-032307) DBG | domain functional-032307 has defined IP address 192.168.39.31 and MAC address 52:54:00:27:c3:33 in network mk-functional-032307
I1016 17:56:10.608569   21592 main.go:141] libmachine: (functional-032307) Calling .GetSSHPort
I1016 17:56:10.608747   21592 main.go:141] libmachine: (functional-032307) Calling .GetSSHKeyPath
I1016 17:56:10.608943   21592 main.go:141] libmachine: (functional-032307) Calling .GetSSHUsername
I1016 17:56:10.609101   21592 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/functional-032307/id_rsa Username:docker}
I1016 17:56:10.691196   21592 build_images.go:161] Building image from path: /tmp/build.3308137437.tar
I1016 17:56:10.691257   21592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1016 17:56:10.704084   21592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3308137437.tar
I1016 17:56:10.709335   21592 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3308137437.tar: stat -c "%s %y" /var/lib/minikube/build/build.3308137437.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3308137437.tar': No such file or directory
I1016 17:56:10.709367   21592 ssh_runner.go:362] scp /tmp/build.3308137437.tar --> /var/lib/minikube/build/build.3308137437.tar (3072 bytes)
I1016 17:56:10.741208   21592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3308137437
I1016 17:56:10.757700   21592 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3308137437 -xf /var/lib/minikube/build/build.3308137437.tar
I1016 17:56:10.771748   21592 crio.go:315] Building image: /var/lib/minikube/build/build.3308137437
I1016 17:56:10.771833   21592 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-032307 /var/lib/minikube/build/build.3308137437 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1016 17:56:13.654015   21592 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-032307 /var/lib/minikube/build/build.3308137437 --cgroup-manager=cgroupfs: (2.882140096s)
I1016 17:56:13.654097   21592 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3308137437
I1016 17:56:13.675073   21592 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3308137437.tar
I1016 17:56:13.695485   21592 build_images.go:217] Built localhost/my-image:functional-032307 from /tmp/build.3308137437.tar
I1016 17:56:13.695527   21592 build_images.go:133] succeeded building to: functional-032307
I1016 17:56:13.695534   21592 build_images.go:134] failed building to: 
I1016 17:56:13.695564   21592 main.go:141] libmachine: Making call to close driver server
I1016 17:56:13.695586   21592 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:13.695889   21592 main.go:141] libmachine: (functional-032307) DBG | Closing plugin on server side
I1016 17:56:13.695971   21592 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:13.696007   21592 main.go:141] libmachine: Making call to close connection to plugin binary
I1016 17:56:13.696022   21592 main.go:141] libmachine: Making call to close driver server
I1016 17:56:13.696034   21592 main.go:141] libmachine: (functional-032307) Calling .Close
I1016 17:56:13.696288   21592 main.go:141] libmachine: Successfully made call to close driver server
I1016 17:56:13.696302   21592 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.962375346s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-032307
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "323.688499ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.214155ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "300.174989ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.254599ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/any-port (21.66s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdany-port1970186687/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760637342819731517" to /tmp/TestFunctionalparallelMountCmdany-port1970186687/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760637342819731517" to /tmp/TestFunctionalparallelMountCmdany-port1970186687/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760637342819731517" to /tmp/TestFunctionalparallelMountCmdany-port1970186687/001/test-1760637342819731517
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.94851ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1016 17:55:43.024992   12767 retry.go:31] will retry after 582.993883ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 16 17:55 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 16 17:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 16 17:55 test-1760637342819731517
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh cat /mount-9p/test-1760637342819731517
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-032307 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6deeb7f2-2c37-4ead-90c0-41b1b59e18f6] Pending
helpers_test.go:352: "busybox-mount" [6deeb7f2-2c37-4ead-90c0-41b1b59e18f6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6deeb7f2-2c37-4ead-90c0-41b1b59e18f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6deeb7f2-2c37-4ead-90c0-41b1b59e18f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.008456493s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-032307 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh stat /mount-9p/created-by-test
I1016 17:56:03.644603   12767 detect.go:223] nested VM detected
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdany-port1970186687/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image load --daemon kicbase/echo-server:functional-032307 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 image load --daemon kicbase/echo-server:functional-032307 --alsologtostderr: (1.340059499s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image load --daemon kicbase/echo-server:functional-032307 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-032307
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image load --daemon kicbase/echo-server:functional-032307 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image save kicbase/echo-server:functional-032307 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 image save kicbase/echo-server:functional-032307 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.130833548s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image rm kicbase/echo-server:functional-032307 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-032307
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 image save --daemon kicbase/echo-server:functional-032307 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-032307
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdspecific-port358627569/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.624151ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1016 17:56:04.723331   12767 retry.go:31] will retry after 592.614796ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdspecific-port358627569/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh "sudo umount -f /mount-9p": exit status 1 (185.112232ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-032307 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdspecific-port358627569/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-032307 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-032307 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6zjgr" [c2a368c4-1da6-4cec-98dd-697adae0c6d1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-6zjgr" [c2a368c4-1da6-4cec-98dd-697adae0c6d1] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.004335811s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.18s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup904571586/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup904571586/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup904571586/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T" /mount1: exit status 1 (244.960859ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1016 17:56:06.613705   12767 retry.go:31] will retry after 394.817326ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-032307 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup904571586/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup904571586/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup904571586/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)

TestFunctional/parallel/ServiceCmd/List (1.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 service list: (1.259724979s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.26s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-032307 service list -o json: (1.274158879s)
functional_test.go:1504: Took "1.274257893s" to run "out/minikube-linux-amd64 -p functional-032307 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.31:30244
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-032307 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.31:30244
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-032307
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-032307
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-032307
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (198.2s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1016 17:57:54.284378   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:58:21.990249   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m17.505970819s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (198.20s)

TestMultiControlPlane/serial/DeployApp (6.69s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 kubectl -- rollout status deployment/busybox: (4.570318401s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-6spz7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-brv4l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-g682x -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-6spz7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-brv4l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-g682x -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-6spz7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-brv4l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-g682x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.69s)

TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-6spz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-6spz7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-brv4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-brv4l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-g682x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 kubectl -- exec busybox-7b57f96db7-g682x -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)

TestMultiControlPlane/serial/AddWorkerNode (44.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 node add --alsologtostderr -v 5: (43.783766125s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.66s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-666046 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (12.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp testdata/cp-test.txt ha-666046:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2346764277/001/cp-test_ha-666046.txt
E1016 18:00:41.932360   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:00:41.938765   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:00:41.950132   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:00:41.971555   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:00:42.012985   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test.txt"
E1016 18:00:42.094996   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046:/home/docker/cp-test.txt ha-666046-m02:/home/docker/cp-test_ha-666046_ha-666046-m02.txt
E1016 18:00:42.256751   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test.txt"
E1016 18:00:42.578444   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test_ha-666046_ha-666046-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046:/home/docker/cp-test.txt ha-666046-m03:/home/docker/cp-test_ha-666046_ha-666046-m03.txt
E1016 18:00:43.219750   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test_ha-666046_ha-666046-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046:/home/docker/cp-test.txt ha-666046-m04:/home/docker/cp-test_ha-666046_ha-666046-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test_ha-666046_ha-666046-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp testdata/cp-test.txt ha-666046-m02:/home/docker/cp-test.txt
E1016 18:00:44.501547   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2346764277/001/cp-test_ha-666046-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m02:/home/docker/cp-test.txt ha-666046:/home/docker/cp-test_ha-666046-m02_ha-666046.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test_ha-666046-m02_ha-666046.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m02:/home/docker/cp-test.txt ha-666046-m03:/home/docker/cp-test_ha-666046-m02_ha-666046-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test_ha-666046-m02_ha-666046-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m02:/home/docker/cp-test.txt ha-666046-m04:/home/docker/cp-test_ha-666046-m02_ha-666046-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test.txt"
E1016 18:00:47.063797   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test_ha-666046-m02_ha-666046-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp testdata/cp-test.txt ha-666046-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2346764277/001/cp-test_ha-666046-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m03:/home/docker/cp-test.txt ha-666046:/home/docker/cp-test_ha-666046-m03_ha-666046.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test_ha-666046-m03_ha-666046.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m03:/home/docker/cp-test.txt ha-666046-m02:/home/docker/cp-test_ha-666046-m03_ha-666046-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test_ha-666046-m03_ha-666046-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m03:/home/docker/cp-test.txt ha-666046-m04:/home/docker/cp-test_ha-666046-m03_ha-666046-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test_ha-666046-m03_ha-666046-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp testdata/cp-test.txt ha-666046-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2346764277/001/cp-test_ha-666046-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m04:/home/docker/cp-test.txt ha-666046:/home/docker/cp-test_ha-666046-m04_ha-666046.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046 "sudo cat /home/docker/cp-test_ha-666046-m04_ha-666046.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m04:/home/docker/cp-test.txt ha-666046-m02:/home/docker/cp-test_ha-666046-m04_ha-666046-m02.txt
E1016 18:00:52.185577   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m02 "sudo cat /home/docker/cp-test_ha-666046-m04_ha-666046-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 cp ha-666046-m04:/home/docker/cp-test.txt ha-666046-m03:/home/docker/cp-test_ha-666046-m04_ha-666046-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 ssh -n ha-666046-m03 "sudo cat /home/docker/cp-test_ha-666046-m04_ha-666046-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.91s)

TestMultiControlPlane/serial/StopSecondaryNode (82.91s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 node stop m02 --alsologtostderr -v 5
E1016 18:01:02.427287   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:01:22.909245   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:02:03.871893   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 node stop m02 --alsologtostderr -v 5: (1m22.235162714s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5: exit status 7 (670.727935ms)

-- stdout --
	ha-666046
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-666046-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-666046-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-666046-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1016 18:02:15.738774   26415 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:02:15.739029   26415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:02:15.739038   26415 out.go:374] Setting ErrFile to fd 2...
	I1016 18:02:15.739042   26415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:02:15.739242   26415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:02:15.739432   26415 out.go:368] Setting JSON to false
	I1016 18:02:15.739461   26415 mustload.go:65] Loading cluster: ha-666046
	I1016 18:02:15.739537   26415 notify.go:220] Checking for updates...
	I1016 18:02:15.739969   26415 config.go:182] Loaded profile config "ha-666046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:02:15.739990   26415 status.go:174] checking status of ha-666046 ...
	I1016 18:02:15.740580   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:15.740631   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:15.755449   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I1016 18:02:15.755880   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:15.756471   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:15.756509   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:15.756859   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:15.757067   26415 main.go:141] libmachine: (ha-666046) Calling .GetState
	I1016 18:02:15.759187   26415 status.go:371] ha-666046 host status = "Running" (err=<nil>)
	I1016 18:02:15.759206   26415 host.go:66] Checking if "ha-666046" exists ...
	I1016 18:02:15.759492   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:15.759533   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:15.772779   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34219
	I1016 18:02:15.773145   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:15.773507   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:15.773552   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:15.773882   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:15.774048   26415 main.go:141] libmachine: (ha-666046) Calling .GetIP
	I1016 18:02:15.777314   26415 main.go:141] libmachine: (ha-666046) DBG | domain ha-666046 has defined MAC address 52:54:00:d4:a1:04 in network mk-ha-666046
	I1016 18:02:15.777739   26415 main.go:141] libmachine: (ha-666046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a1:04", ip: ""} in network mk-ha-666046: {Iface:virbr1 ExpiryTime:2025-10-16 18:56:43 +0000 UTC Type:0 Mac:52:54:00:d4:a1:04 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-666046 Clientid:01:52:54:00:d4:a1:04}
	I1016 18:02:15.777770   26415 main.go:141] libmachine: (ha-666046) DBG | domain ha-666046 has defined IP address 192.168.39.7 and MAC address 52:54:00:d4:a1:04 in network mk-ha-666046
	I1016 18:02:15.777931   26415 host.go:66] Checking if "ha-666046" exists ...
	I1016 18:02:15.778302   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:15.778340   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:15.792917   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I1016 18:02:15.793447   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:15.794011   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:15.794046   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:15.794456   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:15.794677   26415 main.go:141] libmachine: (ha-666046) Calling .DriverName
	I1016 18:02:15.794901   26415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:02:15.794927   26415 main.go:141] libmachine: (ha-666046) Calling .GetSSHHostname
	I1016 18:02:15.798804   26415 main.go:141] libmachine: (ha-666046) DBG | domain ha-666046 has defined MAC address 52:54:00:d4:a1:04 in network mk-ha-666046
	I1016 18:02:15.799326   26415 main.go:141] libmachine: (ha-666046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a1:04", ip: ""} in network mk-ha-666046: {Iface:virbr1 ExpiryTime:2025-10-16 18:56:43 +0000 UTC Type:0 Mac:52:54:00:d4:a1:04 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-666046 Clientid:01:52:54:00:d4:a1:04}
	I1016 18:02:15.799361   26415 main.go:141] libmachine: (ha-666046) DBG | domain ha-666046 has defined IP address 192.168.39.7 and MAC address 52:54:00:d4:a1:04 in network mk-ha-666046
	I1016 18:02:15.799538   26415 main.go:141] libmachine: (ha-666046) Calling .GetSSHPort
	I1016 18:02:15.799716   26415 main.go:141] libmachine: (ha-666046) Calling .GetSSHKeyPath
	I1016 18:02:15.799889   26415 main.go:141] libmachine: (ha-666046) Calling .GetSSHUsername
	I1016 18:02:15.800064   26415 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/ha-666046/id_rsa Username:docker}
	I1016 18:02:15.881212   26415 ssh_runner.go:195] Run: systemctl --version
	I1016 18:02:15.888283   26415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:02:15.910975   26415 kubeconfig.go:125] found "ha-666046" server: "https://192.168.39.254:8443"
	I1016 18:02:15.911009   26415 api_server.go:166] Checking apiserver status ...
	I1016 18:02:15.911054   26415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:02:15.932095   26415 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	W1016 18:02:15.945554   26415 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:02:15.945619   26415 ssh_runner.go:195] Run: ls
	I1016 18:02:15.953403   26415 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1016 18:02:15.960969   26415 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1016 18:02:15.961002   26415 status.go:463] ha-666046 apiserver status = Running (err=<nil>)
	I1016 18:02:15.961015   26415 status.go:176] ha-666046 status: &{Name:ha-666046 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:02:15.961033   26415 status.go:174] checking status of ha-666046-m02 ...
	I1016 18:02:15.961510   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:15.961554   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:15.974999   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I1016 18:02:15.975493   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:15.975979   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:15.975999   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:15.976368   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:15.976569   26415 main.go:141] libmachine: (ha-666046-m02) Calling .GetState
	I1016 18:02:15.978422   26415 status.go:371] ha-666046-m02 host status = "Stopped" (err=<nil>)
	I1016 18:02:15.978435   26415 status.go:384] host is not running, skipping remaining checks
	I1016 18:02:15.978440   26415 status.go:176] ha-666046-m02 status: &{Name:ha-666046-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:02:15.978454   26415 status.go:174] checking status of ha-666046-m03 ...
	I1016 18:02:15.978759   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:15.978803   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:15.991608   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I1016 18:02:15.992078   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:15.992554   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:15.992590   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:15.992956   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:15.993142   26415 main.go:141] libmachine: (ha-666046-m03) Calling .GetState
	I1016 18:02:15.994811   26415 status.go:371] ha-666046-m03 host status = "Running" (err=<nil>)
	I1016 18:02:15.994828   26415 host.go:66] Checking if "ha-666046-m03" exists ...
	I1016 18:02:15.995102   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:15.995155   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:16.008422   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I1016 18:02:16.008865   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:16.009318   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:16.009350   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:16.009736   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:16.009906   26415 main.go:141] libmachine: (ha-666046-m03) Calling .GetIP
	I1016 18:02:16.013922   26415 main.go:141] libmachine: (ha-666046-m03) DBG | domain ha-666046-m03 has defined MAC address 52:54:00:4b:5f:6f in network mk-ha-666046
	I1016 18:02:16.014561   26415 main.go:141] libmachine: (ha-666046-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:5f:6f", ip: ""} in network mk-ha-666046: {Iface:virbr1 ExpiryTime:2025-10-16 18:58:45 +0000 UTC Type:0 Mac:52:54:00:4b:5f:6f Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-666046-m03 Clientid:01:52:54:00:4b:5f:6f}
	I1016 18:02:16.014603   26415 main.go:141] libmachine: (ha-666046-m03) DBG | domain ha-666046-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:4b:5f:6f in network mk-ha-666046
	I1016 18:02:16.014763   26415 host.go:66] Checking if "ha-666046-m03" exists ...
	I1016 18:02:16.015196   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:16.015240   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:16.031137   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I1016 18:02:16.031774   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:16.032261   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:16.032285   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:16.032726   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:16.032931   26415 main.go:141] libmachine: (ha-666046-m03) Calling .DriverName
	I1016 18:02:16.033163   26415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:02:16.033192   26415 main.go:141] libmachine: (ha-666046-m03) Calling .GetSSHHostname
	I1016 18:02:16.037503   26415 main.go:141] libmachine: (ha-666046-m03) DBG | domain ha-666046-m03 has defined MAC address 52:54:00:4b:5f:6f in network mk-ha-666046
	I1016 18:02:16.038091   26415 main.go:141] libmachine: (ha-666046-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:5f:6f", ip: ""} in network mk-ha-666046: {Iface:virbr1 ExpiryTime:2025-10-16 18:58:45 +0000 UTC Type:0 Mac:52:54:00:4b:5f:6f Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-666046-m03 Clientid:01:52:54:00:4b:5f:6f}
	I1016 18:02:16.038132   26415 main.go:141] libmachine: (ha-666046-m03) DBG | domain ha-666046-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:4b:5f:6f in network mk-ha-666046
	I1016 18:02:16.038409   26415 main.go:141] libmachine: (ha-666046-m03) Calling .GetSSHPort
	I1016 18:02:16.038580   26415 main.go:141] libmachine: (ha-666046-m03) Calling .GetSSHKeyPath
	I1016 18:02:16.038744   26415 main.go:141] libmachine: (ha-666046-m03) Calling .GetSSHUsername
	I1016 18:02:16.038895   26415 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/ha-666046-m03/id_rsa Username:docker}
	I1016 18:02:16.126989   26415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:02:16.148985   26415 kubeconfig.go:125] found "ha-666046" server: "https://192.168.39.254:8443"
	I1016 18:02:16.149017   26415 api_server.go:166] Checking apiserver status ...
	I1016 18:02:16.149055   26415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:02:16.169431   26415 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1774/cgroup
	W1016 18:02:16.181731   26415 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1774/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:02:16.181788   26415 ssh_runner.go:195] Run: ls
	I1016 18:02:16.187465   26415 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1016 18:02:16.193340   26415 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1016 18:02:16.193378   26415 status.go:463] ha-666046-m03 apiserver status = Running (err=<nil>)
	I1016 18:02:16.193386   26415 status.go:176] ha-666046-m03 status: &{Name:ha-666046-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:02:16.193402   26415 status.go:174] checking status of ha-666046-m04 ...
	I1016 18:02:16.193688   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:16.193729   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:16.208499   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33407
	I1016 18:02:16.209022   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:16.209586   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:16.209638   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:16.210048   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:16.210251   26415 main.go:141] libmachine: (ha-666046-m04) Calling .GetState
	I1016 18:02:16.212073   26415 status.go:371] ha-666046-m04 host status = "Running" (err=<nil>)
	I1016 18:02:16.212088   26415 host.go:66] Checking if "ha-666046-m04" exists ...
	I1016 18:02:16.212478   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:16.212526   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:16.225716   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I1016 18:02:16.226154   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:16.226642   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:16.226670   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:16.227046   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:16.227294   26415 main.go:141] libmachine: (ha-666046-m04) Calling .GetIP
	I1016 18:02:16.230077   26415 main.go:141] libmachine: (ha-666046-m04) DBG | domain ha-666046-m04 has defined MAC address 52:54:00:d9:99:79 in network mk-ha-666046
	I1016 18:02:16.230499   26415 main.go:141] libmachine: (ha-666046-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:99:79", ip: ""} in network mk-ha-666046: {Iface:virbr1 ExpiryTime:2025-10-16 19:00:11 +0000 UTC Type:0 Mac:52:54:00:d9:99:79 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-666046-m04 Clientid:01:52:54:00:d9:99:79}
	I1016 18:02:16.230524   26415 main.go:141] libmachine: (ha-666046-m04) DBG | domain ha-666046-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:d9:99:79 in network mk-ha-666046
	I1016 18:02:16.230709   26415 host.go:66] Checking if "ha-666046-m04" exists ...
	I1016 18:02:16.231030   26415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:02:16.231074   26415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:02:16.244930   26415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I1016 18:02:16.245371   26415 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:02:16.245915   26415 main.go:141] libmachine: Using API Version  1
	I1016 18:02:16.245934   26415 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:02:16.246343   26415 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:02:16.246527   26415 main.go:141] libmachine: (ha-666046-m04) Calling .DriverName
	I1016 18:02:16.246711   26415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:02:16.246729   26415 main.go:141] libmachine: (ha-666046-m04) Calling .GetSSHHostname
	I1016 18:02:16.249842   26415 main.go:141] libmachine: (ha-666046-m04) DBG | domain ha-666046-m04 has defined MAC address 52:54:00:d9:99:79 in network mk-ha-666046
	I1016 18:02:16.250493   26415 main.go:141] libmachine: (ha-666046-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:99:79", ip: ""} in network mk-ha-666046: {Iface:virbr1 ExpiryTime:2025-10-16 19:00:11 +0000 UTC Type:0 Mac:52:54:00:d9:99:79 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-666046-m04 Clientid:01:52:54:00:d9:99:79}
	I1016 18:02:16.250512   26415 main.go:141] libmachine: (ha-666046-m04) DBG | domain ha-666046-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:d9:99:79 in network mk-ha-666046
	I1016 18:02:16.250745   26415 main.go:141] libmachine: (ha-666046-m04) Calling .GetSSHPort
	I1016 18:02:16.250916   26415 main.go:141] libmachine: (ha-666046-m04) Calling .GetSSHKeyPath
	I1016 18:02:16.251089   26415 main.go:141] libmachine: (ha-666046-m04) Calling .GetSSHUsername
	I1016 18:02:16.251238   26415 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/ha-666046-m04/id_rsa Username:docker}
	I1016 18:02:16.339788   26415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:02:16.360686   26415 status.go:176] ha-666046-m04 status: &{Name:ha-666046-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.91s)
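Note: as the stderr trace above shows, `status` decides a control-plane node is healthy by SSH-probing kubelet and then polling the apiserver's /healthz endpoint through the load-balancer address. A minimal Go sketch of that final probe, for illustration only (the endpoint URL, timeout, and TLS handling here are assumptions, not minikube's actual client code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The test cluster is reached by bare IP with a self-signed CA, so
		// certificate verification is skipped here; a real client should
		// load the cluster CA instead.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with body "ok", matching the
		// "healthz returned 200: ok" lines in the trace above.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}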

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 node start m02 --alsologtostderr -v 5: (34.830796992s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5: (1.034196407s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.077565051s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.79s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 stop --alsologtostderr -v 5
E1016 18:02:54.283270   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:03:25.794288   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:05:41.933258   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:09.637446   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 stop --alsologtostderr -v 5: (4m6.504726933s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 start --wait true --alsologtostderr -v 5
E1016 18:07:54.283673   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 start --wait true --alsologtostderr -v 5: (2m5.160477646s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.79s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.37s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 node delete m03 --alsologtostderr -v 5
E1016 18:09:17.353389   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 node delete m03 --alsologtostderr -v 5: (17.619590388s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.37s)
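The go-template in the last kubectl call above walks each node's status.conditions and prints only the Ready condition's status, so the test expects one " True" line per remaining node. A small Go sketch of the same evaluation, for illustration (kubectl runs the template over the JSON map, hence its lowercase paths; the exported struct fields below are stand-ins):

	package main

	import (
		"os"
		"text/template"
	)

	// Minimal stand-ins for the fields the template touches.
	type condition struct {
		Type   string
		Status string
	}

	type node struct {
		Status struct{ Conditions []condition }
	}

	type nodeList struct{ Items []node }

	const readyTmpl = `{{range .Items}}{{range .Status.Conditions}}` +
		`{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		var ready node
		ready.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
		list := nodeList{Items: []node{ready, ready, ready}}
		// Prints one " True" line per node, which is what the test asserts.
		template.Must(template.New("ready").Parse(readyTmpl)).Execute(os.Stdout, list)
	}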

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (257.83s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 stop --alsologtostderr -v 5
E1016 18:10:41.938318   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:12:54.283701   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 stop --alsologtostderr -v 5: (4m17.727992894s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5: exit status 7 (98.744871ms)
-- stdout --
	ha-666046
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-666046-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-666046-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1016 18:13:42.623934   30371 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:13:42.624189   30371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:13:42.624203   30371 out.go:374] Setting ErrFile to fd 2...
	I1016 18:13:42.624207   30371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:13:42.624364   30371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:13:42.624507   30371 out.go:368] Setting JSON to false
	I1016 18:13:42.624531   30371 mustload.go:65] Loading cluster: ha-666046
	I1016 18:13:42.624624   30371 notify.go:220] Checking for updates...
	I1016 18:13:42.624968   30371 config.go:182] Loaded profile config "ha-666046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:13:42.624988   30371 status.go:174] checking status of ha-666046 ...
	I1016 18:13:42.625466   30371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:13:42.625531   30371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:13:42.640324   30371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I1016 18:13:42.640746   30371 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:13:42.641291   30371 main.go:141] libmachine: Using API Version  1
	I1016 18:13:42.641320   30371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:13:42.641698   30371 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:13:42.641883   30371 main.go:141] libmachine: (ha-666046) Calling .GetState
	I1016 18:13:42.643709   30371 status.go:371] ha-666046 host status = "Stopped" (err=<nil>)
	I1016 18:13:42.643723   30371 status.go:384] host is not running, skipping remaining checks
	I1016 18:13:42.643728   30371 status.go:176] ha-666046 status: &{Name:ha-666046 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:13:42.643751   30371 status.go:174] checking status of ha-666046-m02 ...
	I1016 18:13:42.644048   30371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:13:42.644090   30371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:13:42.657348   30371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I1016 18:13:42.657854   30371 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:13:42.658335   30371 main.go:141] libmachine: Using API Version  1
	I1016 18:13:42.658360   30371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:13:42.658718   30371 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:13:42.658899   30371 main.go:141] libmachine: (ha-666046-m02) Calling .GetState
	I1016 18:13:42.660482   30371 status.go:371] ha-666046-m02 host status = "Stopped" (err=<nil>)
	I1016 18:13:42.660496   30371 status.go:384] host is not running, skipping remaining checks
	I1016 18:13:42.660503   30371 status.go:176] ha-666046-m02 status: &{Name:ha-666046-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:13:42.660522   30371 status.go:174] checking status of ha-666046-m04 ...
	I1016 18:13:42.660819   30371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:13:42.660863   30371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:13:42.673789   30371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1016 18:13:42.674268   30371 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:13:42.674687   30371 main.go:141] libmachine: Using API Version  1
	I1016 18:13:42.674704   30371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:13:42.675032   30371 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:13:42.675262   30371 main.go:141] libmachine: (ha-666046-m04) Calling .GetState
	I1016 18:13:42.677044   30371 status.go:371] ha-666046-m04 host status = "Stopped" (err=<nil>)
	I1016 18:13:42.677060   30371 status.go:384] host is not running, skipping remaining checks
	I1016 18:13:42.677067   30371 status.go:176] ha-666046-m04 status: &{Name:ha-666046-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (257.83s)

TestMultiControlPlane/serial/RestartCluster (115.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m54.504415646s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (115.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (74.95s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 node add --control-plane --alsologtostderr -v 5
E1016 18:15:41.932154   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-666046 node add --control-plane --alsologtostderr -v 5: (1m14.056804574s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-666046 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (84.34s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-783658 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1016 18:17:04.999311   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:17:54.284849   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-783658 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.343824567s)
--- PASS: TestJSONOutput/start/Command (84.34s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-783658 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-783658 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-783658 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-783658 --output=json --user=testUser: (6.859119382s)
--- PASS: TestJSONOutput/stop/Command (6.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-184794 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-184794 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (60.90867ms)
-- stdout --
	{"specversion":"1.0","id":"5c995b59-d26f-400f-9fc9-deed35abeae6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-184794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f3984dc-c5f9-4a37-9fc0-dce2aec47282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21738"}}
	{"specversion":"1.0","id":"d7872e99-d750-45e2-95d1-cad71de71800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"54c78f9c-dcab-487a-a9e0-383e308e6e2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig"}}
	{"specversion":"1.0","id":"87003f9e-5d01-40c8-9e3e-7140d36d34a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube"}}
	{"specversion":"1.0","id":"3c2601fe-60dc-4ea3-85dc-b0e441d237ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4cfb2468-8906-427c-a9c8-e4a6a486de09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9dec54f2-7896-44ab-bd61-256d62f4ed06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-184794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-184794
--- PASS: TestErrorJSONOutput (0.20s)
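Every line minikube emits under --output=json is a self-contained CloudEvents-style object, as the stdout above shows: a `type` such as io.k8s.sigs.minikube.step, .info, or .error, plus a `data` payload. A hedged Go sketch of consuming that stream line by line (the sample literal is an abbreviated copy of the error event above; this is not minikube's own code):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	// Only the fields inspected here; the real events carry more
	// (specversion, id, source, datacontenttype).
	type event struct {
		Type string          `json:"type"`
		Data json.RawMessage `json:"data"`
	}

	func main() {
		stream := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		sc := bufio.NewScanner(strings.NewReader(stream))
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate any non-JSON noise on the stream
			}
			fmt.Printf("%s: %s\n", e.Type, e.Data)
		}
	}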

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (80.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-342691 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-342691 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.120760788s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-344870 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-344870 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.59360736s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-342691
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-344870
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-344870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-344870
helpers_test.go:175: Cleaning up "first-342691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-342691
--- PASS: TestMinikubeProfile (80.50s)

TestMountStart/serial/StartWithMountFirst (20.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-324547 --memory=3072 --mount-string /tmp/TestMountStartserial3845336812/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-324547 --memory=3072 --mount-string /tmp/TestMountStartserial3845336812/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.402307439s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.40s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-324547 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-324547 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
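The mount checks pair a plain `ls` with `findmnt --json`, which reports the mount point as a JSON document rooted at a `filesystems` array. A sketch of parsing that shape in Go; the sample values (the 9p source and options) are illustrative guesses, not captured from this run:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// The JSON shape produced by `findmnt --json <target>`.
	type findmntOutput struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			Fstype  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		sample := `{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`
		var out findmntOutput
		if err := json.Unmarshal([]byte(sample), &out); err != nil {
			panic(err)
		}
		for _, fs := range out.Filesystems {
			// The test only needs the mount to exist; printing the fields
			// shows what the JSON verification has to work with.
			fmt.Printf("%s mounted from %s (%s, %s)\n", fs.Target, fs.Source, fs.Fstype, fs.Options)
		}
	}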

TestMountStart/serial/StartWithMountSecond (21.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-340659 --memory=3072 --mount-string /tmp/TestMountStartserial3845336812/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-340659 --memory=3072 --mount-string /tmp/TestMountStartserial3845336812/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.688105156s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.69s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340659 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340659 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-324547 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340659 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340659 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-340659
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-340659: (1.243518862s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (19.34s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-340659
E1016 18:20:41.936340   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-340659: (18.334905887s)
--- PASS: TestMountStart/serial/RestartStopped (19.34s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340659 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340659 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (97.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225382 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225382 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.86068682s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.28s)

TestMultiNode/serial/DeployApp2Nodes (6.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-225382 -- rollout status deployment/busybox: (4.640983128s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-dksxm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-md7jm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-dksxm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-md7jm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-dksxm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-md7jm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.16s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-dksxm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-dksxm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-md7jm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225382 -- exec busybox-7b57f96db7-md7jm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
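The shell pipeline above leans on BusyBox nslookup output: line 5 carries the resolved address, and the third space-separated field is the IP, which the pod then pings once. A sketch of the same two steps, with a placeholder pod name:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		pod := "busybox-aaaaa" // placeholder
		// Extract the host IP exactly as the logged pipeline does.
		pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
		ipBytes, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", pipeline).Output()
		if err != nil {
			log.Fatalf("lookup failed: %v", err)
		}
		ip := strings.TrimSpace(string(ipBytes))
		// One ICMP echo is enough to prove the host is reachable from the pod.
		if out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).CombinedOutput(); err != nil {
			log.Fatalf("ping %s failed: %v\n%s", ip, err, out)
		}
		fmt.Printf("host %s reachable from %s\n", ip, pod)
	}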

TestMultiNode/serial/AddNode (41.32s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-225382 -v=5 --alsologtostderr
E1016 18:22:54.284294   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-225382 -v=5 --alsologtostderr: (40.742728438s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.32s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-225382 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (7.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp testdata/cp-test.txt multinode-225382:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1571265937/001/cp-test_multinode-225382.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382:/home/docker/cp-test.txt multinode-225382-m02:/home/docker/cp-test_multinode-225382_multinode-225382-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m02 "sudo cat /home/docker/cp-test_multinode-225382_multinode-225382-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382:/home/docker/cp-test.txt multinode-225382-m03:/home/docker/cp-test_multinode-225382_multinode-225382-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m03 "sudo cat /home/docker/cp-test_multinode-225382_multinode-225382-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp testdata/cp-test.txt multinode-225382-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1571265937/001/cp-test_multinode-225382-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382-m02:/home/docker/cp-test.txt multinode-225382:/home/docker/cp-test_multinode-225382-m02_multinode-225382.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382 "sudo cat /home/docker/cp-test_multinode-225382-m02_multinode-225382.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382-m02:/home/docker/cp-test.txt multinode-225382-m03:/home/docker/cp-test_multinode-225382-m02_multinode-225382-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m03 "sudo cat /home/docker/cp-test_multinode-225382-m02_multinode-225382-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp testdata/cp-test.txt multinode-225382-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1571265937/001/cp-test_multinode-225382-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382-m03:/home/docker/cp-test.txt multinode-225382:/home/docker/cp-test_multinode-225382-m03_multinode-225382.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382 "sudo cat /home/docker/cp-test_multinode-225382-m03_multinode-225382.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 cp multinode-225382-m03:/home/docker/cp-test.txt multinode-225382-m02:/home/docker/cp-test_multinode-225382-m03_multinode-225382-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 ssh -n multinode-225382-m02 "sudo cat /home/docker/cp-test_multinode-225382-m03_multinode-225382-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)
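Every cp in the sequence above is immediately verified by reading the file back over SSH on the destination node. A sketch of that copy-and-verify pattern, with placeholder profile and node names:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func copyAndVerify(profile, src, node, dst string) error {
		cp := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst)
		if out, err := cp.CombinedOutput(); err != nil {
			return fmt.Errorf("cp failed: %v\n%s", err, out)
		}
		// Read the file back on the target node, as the helpers above do.
		cat := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
		out, err := cat.CombinedOutput()
		if err != nil {
			return fmt.Errorf("readback failed: %v\n%s", err, out)
		}
		fmt.Printf("%s:%s holds %d bytes\n", node, dst, len(out))
		return nil
	}

	func main() {
		if err := copyAndVerify("demo-multinode", "testdata/cp-test.txt",
			"demo-multinode-m02", "/home/docker/cp-test.txt"); err != nil {
			log.Fatal(err)
		}
	}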

TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-225382 node stop m03: (1.527563551s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225382 status: exit status 7 (421.418142ms)

-- stdout --
	multinode-225382
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-225382-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-225382-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr: exit status 7 (429.181644ms)

-- stdout --
	multinode-225382
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-225382-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-225382-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1016 18:23:32.702538   38435 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:23:32.702830   38435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:23:32.702840   38435 out.go:374] Setting ErrFile to fd 2...
	I1016 18:23:32.702844   38435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:23:32.703079   38435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:23:32.703344   38435 out.go:368] Setting JSON to false
	I1016 18:23:32.703375   38435 mustload.go:65] Loading cluster: multinode-225382
	I1016 18:23:32.703500   38435 notify.go:220] Checking for updates...
	I1016 18:23:32.703833   38435 config.go:182] Loaded profile config "multinode-225382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:23:32.703849   38435 status.go:174] checking status of multinode-225382 ...
	I1016 18:23:32.704300   38435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:23:32.704345   38435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:23:32.723677   38435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I1016 18:23:32.724125   38435 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:23:32.724695   38435 main.go:141] libmachine: Using API Version  1
	I1016 18:23:32.724740   38435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:23:32.725096   38435 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:23:32.725300   38435 main.go:141] libmachine: (multinode-225382) Calling .GetState
	I1016 18:23:32.727205   38435 status.go:371] multinode-225382 host status = "Running" (err=<nil>)
	I1016 18:23:32.727223   38435 host.go:66] Checking if "multinode-225382" exists ...
	I1016 18:23:32.727534   38435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:23:32.727571   38435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:23:32.741313   38435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38217
	I1016 18:23:32.741727   38435 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:23:32.742192   38435 main.go:141] libmachine: Using API Version  1
	I1016 18:23:32.742214   38435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:23:32.742565   38435 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:23:32.742776   38435 main.go:141] libmachine: (multinode-225382) Calling .GetIP
	I1016 18:23:32.745558   38435 main.go:141] libmachine: (multinode-225382) DBG | domain multinode-225382 has defined MAC address 52:54:00:7d:4d:e0 in network mk-multinode-225382
	I1016 18:23:32.746049   38435 main.go:141] libmachine: (multinode-225382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:4d:e0", ip: ""} in network mk-multinode-225382: {Iface:virbr1 ExpiryTime:2025-10-16 19:21:12 +0000 UTC Type:0 Mac:52:54:00:7d:4d:e0 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-225382 Clientid:01:52:54:00:7d:4d:e0}
	I1016 18:23:32.746079   38435 main.go:141] libmachine: (multinode-225382) DBG | domain multinode-225382 has defined IP address 192.168.39.80 and MAC address 52:54:00:7d:4d:e0 in network mk-multinode-225382
	I1016 18:23:32.746280   38435 host.go:66] Checking if "multinode-225382" exists ...
	I1016 18:23:32.746612   38435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:23:32.746653   38435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:23:32.760474   38435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36413
	I1016 18:23:32.760966   38435 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:23:32.761499   38435 main.go:141] libmachine: Using API Version  1
	I1016 18:23:32.761525   38435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:23:32.761874   38435 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:23:32.762064   38435 main.go:141] libmachine: (multinode-225382) Calling .DriverName
	I1016 18:23:32.762299   38435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:23:32.762325   38435 main.go:141] libmachine: (multinode-225382) Calling .GetSSHHostname
	I1016 18:23:32.764987   38435 main.go:141] libmachine: (multinode-225382) DBG | domain multinode-225382 has defined MAC address 52:54:00:7d:4d:e0 in network mk-multinode-225382
	I1016 18:23:32.765393   38435 main.go:141] libmachine: (multinode-225382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:4d:e0", ip: ""} in network mk-multinode-225382: {Iface:virbr1 ExpiryTime:2025-10-16 19:21:12 +0000 UTC Type:0 Mac:52:54:00:7d:4d:e0 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-225382 Clientid:01:52:54:00:7d:4d:e0}
	I1016 18:23:32.765410   38435 main.go:141] libmachine: (multinode-225382) DBG | domain multinode-225382 has defined IP address 192.168.39.80 and MAC address 52:54:00:7d:4d:e0 in network mk-multinode-225382
	I1016 18:23:32.765613   38435 main.go:141] libmachine: (multinode-225382) Calling .GetSSHPort
	I1016 18:23:32.765792   38435 main.go:141] libmachine: (multinode-225382) Calling .GetSSHKeyPath
	I1016 18:23:32.765939   38435 main.go:141] libmachine: (multinode-225382) Calling .GetSSHUsername
	I1016 18:23:32.766084   38435 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/multinode-225382/id_rsa Username:docker}
	I1016 18:23:32.850057   38435 ssh_runner.go:195] Run: systemctl --version
	I1016 18:23:32.856776   38435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:23:32.873297   38435 kubeconfig.go:125] found "multinode-225382" server: "https://192.168.39.80:8443"
	I1016 18:23:32.873340   38435 api_server.go:166] Checking apiserver status ...
	I1016 18:23:32.873371   38435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:23:32.891630   38435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W1016 18:23:32.902242   38435 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:23:32.902304   38435 ssh_runner.go:195] Run: ls
	I1016 18:23:32.907208   38435 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I1016 18:23:32.911573   38435 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I1016 18:23:32.911595   38435 status.go:463] multinode-225382 apiserver status = Running (err=<nil>)
	I1016 18:23:32.911615   38435 status.go:176] multinode-225382 status: &{Name:multinode-225382 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:23:32.911631   38435 status.go:174] checking status of multinode-225382-m02 ...
	I1016 18:23:32.911914   38435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:23:32.911943   38435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:23:32.925556   38435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I1016 18:23:32.926018   38435 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:23:32.926682   38435 main.go:141] libmachine: Using API Version  1
	I1016 18:23:32.926707   38435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:23:32.927064   38435 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:23:32.927276   38435 main.go:141] libmachine: (multinode-225382-m02) Calling .GetState
	I1016 18:23:32.929080   38435 status.go:371] multinode-225382-m02 host status = "Running" (err=<nil>)
	I1016 18:23:32.929097   38435 host.go:66] Checking if "multinode-225382-m02" exists ...
	I1016 18:23:32.929416   38435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:23:32.929459   38435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:23:32.943449   38435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I1016 18:23:32.943940   38435 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:23:32.944520   38435 main.go:141] libmachine: Using API Version  1
	I1016 18:23:32.944545   38435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:23:32.944878   38435 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:23:32.945082   38435 main.go:141] libmachine: (multinode-225382-m02) Calling .GetIP
	I1016 18:23:32.947991   38435 main.go:141] libmachine: (multinode-225382-m02) DBG | domain multinode-225382-m02 has defined MAC address 52:54:00:8d:45:1e in network mk-multinode-225382
	I1016 18:23:32.948469   38435 main.go:141] libmachine: (multinode-225382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:45:1e", ip: ""} in network mk-multinode-225382: {Iface:virbr1 ExpiryTime:2025-10-16 19:22:05 +0000 UTC Type:0 Mac:52:54:00:8d:45:1e Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-225382-m02 Clientid:01:52:54:00:8d:45:1e}
	I1016 18:23:32.948508   38435 main.go:141] libmachine: (multinode-225382-m02) DBG | domain multinode-225382-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:8d:45:1e in network mk-multinode-225382
	I1016 18:23:32.948660   38435 host.go:66] Checking if "multinode-225382-m02" exists ...
	I1016 18:23:32.948945   38435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:23:32.948986   38435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:23:32.963044   38435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
	I1016 18:23:32.963489   38435 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:23:32.963992   38435 main.go:141] libmachine: Using API Version  1
	I1016 18:23:32.964017   38435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:23:32.964391   38435 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:23:32.964595   38435 main.go:141] libmachine: (multinode-225382-m02) Calling .DriverName
	I1016 18:23:32.964770   38435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:23:32.964791   38435 main.go:141] libmachine: (multinode-225382-m02) Calling .GetSSHHostname
	I1016 18:23:32.967814   38435 main.go:141] libmachine: (multinode-225382-m02) DBG | domain multinode-225382-m02 has defined MAC address 52:54:00:8d:45:1e in network mk-multinode-225382
	I1016 18:23:32.968263   38435 main.go:141] libmachine: (multinode-225382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:45:1e", ip: ""} in network mk-multinode-225382: {Iface:virbr1 ExpiryTime:2025-10-16 19:22:05 +0000 UTC Type:0 Mac:52:54:00:8d:45:1e Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-225382-m02 Clientid:01:52:54:00:8d:45:1e}
	I1016 18:23:32.968299   38435 main.go:141] libmachine: (multinode-225382-m02) DBG | domain multinode-225382-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:8d:45:1e in network mk-multinode-225382
	I1016 18:23:32.968491   38435 main.go:141] libmachine: (multinode-225382-m02) Calling .GetSSHPort
	I1016 18:23:32.968676   38435 main.go:141] libmachine: (multinode-225382-m02) Calling .GetSSHKeyPath
	I1016 18:23:32.968836   38435 main.go:141] libmachine: (multinode-225382-m02) Calling .GetSSHUsername
	I1016 18:23:32.968994   38435 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21738-8816/.minikube/machines/multinode-225382-m02/id_rsa Username:docker}
	I1016 18:23:33.051178   38435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:23:33.067393   38435 status.go:176] multinode-225382-m02 status: &{Name:multinode-225382-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:23:33.067425   38435 status.go:174] checking status of multinode-225382-m03 ...
	I1016 18:23:33.067855   38435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:23:33.067899   38435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:23:33.081535   38435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I1016 18:23:33.082080   38435 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:23:33.082495   38435 main.go:141] libmachine: Using API Version  1
	I1016 18:23:33.082521   38435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:23:33.082895   38435 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:23:33.083093   38435 main.go:141] libmachine: (multinode-225382-m03) Calling .GetState
	I1016 18:23:33.084829   38435 status.go:371] multinode-225382-m03 host status = "Stopped" (err=<nil>)
	I1016 18:23:33.084844   38435 status.go:384] host is not running, skipping remaining checks
	I1016 18:23:33.084851   38435 status.go:176] multinode-225382-m03 status: &{Name:multinode-225382-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
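The non-zero exits above are expected: `minikube status` encodes "some node is stopped" in its exit code (7 here), so a caller has to inspect the code instead of treating any error as fatal. A minimal sketch, with a placeholder profile name:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "demo-multinode", "status").Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr):
			// Exit code 7 in the log above signals a stopped host, not a failure to run.
			fmt.Printf("status exit code %d: at least one node is not running\n", exitErr.ExitCode())
		default:
			log.Fatalf("could not invoke minikube: %v", err)
		}
		fmt.Printf("%s", out)
	}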

TestMultiNode/serial/StartAfterStop (39.33s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-225382 node start m03 -v=5 --alsologtostderr: (38.681251251s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.33s)

TestMultiNode/serial/RestartKeepsNodes (302.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225382
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-225382
E1016 18:25:41.932382   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:25:57.357556   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-225382: (2m56.223648617s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225382 --wait=true -v=5 --alsologtostderr
E1016 18:27:54.284290   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225382 --wait=true -v=5 --alsologtostderr: (2m6.409668134s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225382
--- PASS: TestMultiNode/serial/RestartKeepsNodes (302.73s)

TestMultiNode/serial/DeleteNode (2.81s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-225382 node delete m03: (2.276177237s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.81s)

TestMultiNode/serial/StopMultiNode (173.67s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 stop
E1016 18:30:41.932781   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-225382 stop: (2m53.510242586s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225382 status: exit status 7 (83.66503ms)

-- stdout --
	multinode-225382
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-225382-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr: exit status 7 (78.687239ms)

-- stdout --
	multinode-225382
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-225382-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1016 18:32:11.592714   41209 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:32:11.593246   41209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:32:11.593269   41209 out.go:374] Setting ErrFile to fd 2...
	I1016 18:32:11.593275   41209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:32:11.593752   41209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:32:11.594253   41209 out.go:368] Setting JSON to false
	I1016 18:32:11.594292   41209 mustload.go:65] Loading cluster: multinode-225382
	I1016 18:32:11.594370   41209 notify.go:220] Checking for updates...
	I1016 18:32:11.594725   41209 config.go:182] Loaded profile config "multinode-225382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:32:11.594745   41209 status.go:174] checking status of multinode-225382 ...
	I1016 18:32:11.595246   41209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:32:11.595281   41209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:32:11.608382   41209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I1016 18:32:11.608882   41209 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:32:11.609543   41209 main.go:141] libmachine: Using API Version  1
	I1016 18:32:11.609585   41209 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:32:11.609963   41209 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:32:11.610139   41209 main.go:141] libmachine: (multinode-225382) Calling .GetState
	I1016 18:32:11.612054   41209 status.go:371] multinode-225382 host status = "Stopped" (err=<nil>)
	I1016 18:32:11.612069   41209 status.go:384] host is not running, skipping remaining checks
	I1016 18:32:11.612076   41209 status.go:176] multinode-225382 status: &{Name:multinode-225382 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:32:11.612109   41209 status.go:174] checking status of multinode-225382-m02 ...
	I1016 18:32:11.612450   41209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1016 18:32:11.612486   41209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1016 18:32:11.625342   41209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I1016 18:32:11.625716   41209 main.go:141] libmachine: () Calling .GetVersion
	I1016 18:32:11.626184   41209 main.go:141] libmachine: Using API Version  1
	I1016 18:32:11.626208   41209 main.go:141] libmachine: () Calling .SetConfigRaw
	I1016 18:32:11.626529   41209 main.go:141] libmachine: () Calling .GetMachineName
	I1016 18:32:11.626694   41209 main.go:141] libmachine: (multinode-225382-m02) Calling .GetState
	I1016 18:32:11.628409   41209 status.go:371] multinode-225382-m02 host status = "Stopped" (err=<nil>)
	I1016 18:32:11.628424   41209 status.go:384] host is not running, skipping remaining checks
	I1016 18:32:11.628430   41209 status.go:176] multinode-225382-m02 status: &{Name:multinode-225382-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (173.67s)

TestMultiNode/serial/RestartMultiNode (94.62s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225382 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1016 18:32:54.282842   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:45.001297   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225382 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.077113091s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225382 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (94.62s)

TestMultiNode/serial/ValidateNameConflict (39.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225382
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225382-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-225382-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (61.279555ms)

-- stdout --
	* [multinode-225382-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-225382-m02' is duplicated with machine name 'multinode-225382-m02' in profile 'multinode-225382'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225382-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225382-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.005253218s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-225382
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-225382: exit status 80 (225.927362ms)

-- stdout --
	* Adding node m03 to cluster multinode-225382 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-225382-m03 already exists in multinode-225382-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-225382-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.18s)
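The conflict checks above exercise two distinct guards: a profile name that collides with an existing machine name is rejected up front (exit status 14), and `node add` refuses a node name that already belongs to another profile (exit status 80). A sketch of probing the first guard, with placeholder names:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// "demo-multinode-m02" collides with a machine name of an existing
		// multi-node profile, so start should fail fast with a usage error.
		cmd := exec.Command("minikube", "start", "-p", "demo-multinode-m02", "--driver=kvm2")
		_, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("rejected as expected, exit code %d\n", exitErr.ExitCode())
			return
		}
		log.Fatalf("expected a usage error for the duplicated name, got: %v", err)
	}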

TestScheduledStopUnix (108.56s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-386566 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-386566 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.897819479s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-386566 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-386566 -n scheduled-stop-386566
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-386566 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1016 18:37:53.065331   12767 retry.go:31] will retry after 138.708µs: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.066536   12767 retry.go:31] will retry after 221.723µs: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.067717   12767 retry.go:31] will retry after 177.201µs: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.068839   12767 retry.go:31] will retry after 458.617µs: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.070010   12767 retry.go:31] will retry after 493.377µs: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.071170   12767 retry.go:31] will retry after 560.94µs: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.072322   12767 retry.go:31] will retry after 1.420362ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.074550   12767 retry.go:31] will retry after 1.697763ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.076818   12767 retry.go:31] will retry after 3.276103ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.081067   12767 retry.go:31] will retry after 4.923264ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.086296   12767 retry.go:31] will retry after 6.15822ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.093985   12767 retry.go:31] will retry after 9.606856ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.104263   12767 retry.go:31] will retry after 18.600188ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.123514   12767 retry.go:31] will retry after 24.156183ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
I1016 18:37:53.148810   12767 retry.go:31] will retry after 20.16608ms: open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/scheduled-stop-386566/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-386566 --cancel-scheduled
E1016 18:37:54.283369   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-386566 -n scheduled-stop-386566
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-386566
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-386566 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-386566
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-386566: exit status 7 (66.156945ms)

-- stdout --
	scheduled-stop-386566
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-386566 -n scheduled-stop-386566
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-386566 -n scheduled-stop-386566: exit status 7 (65.230378ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-386566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-386566
--- PASS: TestScheduledStopUnix (108.56s)
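The retry.go lines above show the test polling for the scheduled-stop pid file with short, growing delays. A sketch of that wait loop, under the assumption that the poll simply retries until the file exists; the path and limits are placeholders:

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		pidFile := "/tmp/demo-profile/pid" // placeholder path
		delay := 100 * time.Microsecond
		for attempt := 0; attempt < 15; attempt++ {
			if _, err := os.Stat(pidFile); err == nil {
				fmt.Println("pid file present")
				return
			} else {
				log.Printf("will retry after %v: %v", delay, err)
			}
			time.Sleep(delay)
			delay *= 2 // roughly geometric growth, as in the log
		}
		log.Fatal("gave up waiting for the pid file")
	}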

TestRunningBinaryUpgrade (110.67s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1553349187 start -p running-upgrade-715574 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1553349187 start -p running-upgrade-715574 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.505829943s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-715574 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-715574 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.653702354s)
helpers_test.go:175: Cleaning up "running-upgrade-715574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-715574
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-715574: (1.894308193s)
--- PASS: TestRunningBinaryUpgrade (110.67s)
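The upgrade flow above starts a cluster with an old release binary and then runs `start` on the same profile with the binary under test, without deleting anything in between. A sketch of that sequence; both binary paths and the profile name are placeholders:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		profile := "demo-upgrade"
		oldBin := "/tmp/minikube-v1.32.0"    // hypothetical path to the previous release
		newBin := "out/minikube-linux-amd64" // the binary under test
		for _, bin := range []string{oldBin, newBin} {
			cmd := exec.Command(bin, "start", "-p", profile,
				"--memory=3072", "--driver=kvm2", "--container-runtime=crio")
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("%s failed: %v\n%s", bin, err, out)
			}
		}
		log.Printf("profile %s survived the in-place binary upgrade", profile)
	}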

TestKubernetesUpgrade (147.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.491973676s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-698479
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-698479: (1.842074692s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-698479 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-698479 status --format={{.Host}}: exit status 7 (71.031409ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (34.136018107s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-698479 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (84.567475ms)

-- stdout --
	* [kubernetes-upgrade-698479] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-698479
	    minikube start -p kubernetes-upgrade-698479 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6984792 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-698479 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-698479 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.307322094s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-698479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-698479
--- PASS: TestKubernetesUpgrade (147.95s)
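The version dance above encodes one policy: a cluster may move to the same or a newer Kubernetes version, but a downgrade is refused (exit status 106) and the user is pointed at delete-and-recreate instead. A sketch of the comparison behind such a guard, with deliberately minimal version parsing:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func parse(v string) [3]int {
		var out [3]int
		for i, part := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
			n, _ := strconv.Atoi(part)
			out[i] = n
		}
		return out
	}

	// downgrade reports whether requested is strictly older than current.
	func downgrade(current, requested string) bool {
		c, r := parse(current), parse(requested)
		for i := 0; i < 3; i++ {
			if r[i] != c[i] {
				return r[i] < c[i]
			}
		}
		return false
	}

	func main() {
		fmt.Println(downgrade("v1.34.1", "v1.28.0")) // true: refused, as in the log
		fmt.Println(downgrade("v1.28.0", "v1.34.1")) // false: upgrade allowed
	}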

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490378 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-490378 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (82.991119ms)

-- stdout --
	* [NoKubernetes-490378] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
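The fast failure above comes from flag validation: --no-kubernetes and --kubernetes-version contradict each other, so the command exits with a usage error before touching the driver. A sketch of that kind of check; the exit code mirrors the log, the rest is illustrative:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // usage-error code, matching the exit status above
		}
		fmt.Println("flags are consistent")
	}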

TestNoKubernetes/serial/StartWithK8s (96.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490378 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490378 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.529795134s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-490378 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.86s)

TestNetworkPlugins/group/false (3.29s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-557854 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-557854 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (100.470075ms)

-- stdout --
	* [false-557854] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1016 18:40:38.109613   46900 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:40:38.109946   46900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:40:38.109962   46900 out.go:374] Setting ErrFile to fd 2...
	I1016 18:40:38.109969   46900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:40:38.110248   46900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8816/.minikube/bin
	I1016 18:40:38.110713   46900 out.go:368] Setting JSON to false
	I1016 18:40:38.111802   46900 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4976,"bootTime":1760635062,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:40:38.111882   46900 start.go:141] virtualization: kvm guest
	I1016 18:40:38.113686   46900 out.go:179] * [false-557854] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:40:38.114915   46900 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:40:38.114918   46900 notify.go:220] Checking for updates...
	I1016 18:40:38.116861   46900 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:40:38.117965   46900 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8816/kubeconfig
	I1016 18:40:38.119111   46900 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8816/.minikube
	I1016 18:40:38.120254   46900 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:40:38.121354   46900 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:40:38.122796   46900 config.go:182] Loaded profile config "NoKubernetes-490378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:40:38.122897   46900 config.go:182] Loaded profile config "cert-expiration-854144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:40:38.123005   46900 config.go:182] Loaded profile config "cert-options-605457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:40:38.123108   46900 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:40:38.157420   46900 out.go:179] * Using the kvm2 driver based on user configuration
	I1016 18:40:38.158675   46900 start.go:305] selected driver: kvm2
	I1016 18:40:38.158696   46900 start.go:925] validating driver "kvm2" against <nil>
	I1016 18:40:38.158712   46900 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:40:38.160715   46900 out.go:203] 
	W1016 18:40:38.161839   46900 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1016 18:40:38.163001   46900 out.go:203] 

                                                
                                                
** /stderr **
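The non-zero exit above is the expected outcome, not a regression: minikube rejects --cni=false when the container runtime is crio, because crio has no built-in networking and requires a CNI plugin, and this test passes by asserting exactly that usage error (exit status 14, MK_USAGE). A minimal reproduction of the same check, assuming the freshly built binary at out/minikube-linux-amd64:

	out/minikube-linux-amd64 start -p false-557854 --cni=false --container-runtime=crio --driver=kvm2
	echo $?   # expected: 14 ("Exiting due to MK_USAGE: The \"crio\" container runtime requires CNI")
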
net_test.go:88: 
----------------------- debugLogs start: false-557854 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-557854

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-557854

>>> host: /etc/nsswitch.conf:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /etc/hosts:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /etc/resolv.conf:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-557854

>>> host: crictl pods:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: crictl containers:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> k8s: describe netcat deployment:
error: context "false-557854" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-557854" does not exist

>>> k8s: netcat logs:
error: context "false-557854" does not exist

>>> k8s: describe coredns deployment:
error: context "false-557854" does not exist

>>> k8s: describe coredns pods:
error: context "false-557854" does not exist

>>> k8s: coredns logs:
error: context "false-557854" does not exist

>>> k8s: describe api server pod(s):
error: context "false-557854" does not exist

>>> k8s: api server logs:
error: context "false-557854" does not exist

>>> host: /etc/cni:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: ip a s:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: ip r s:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: iptables-save:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: iptables table nat:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> k8s: describe kube-proxy daemon set:
error: context "false-557854" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-557854" does not exist

>>> k8s: kube-proxy logs:
error: context "false-557854" does not exist

>>> host: kubelet daemon status:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: kubelet daemon config:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> k8s: kubelet logs:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.67:8443
  name: NoKubernetes-490378
contexts:
- context:
    cluster: NoKubernetes-490378
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-490378
  name: NoKubernetes-490378
current-context: NoKubernetes-490378
kind: Config
users:
- name: NoKubernetes-490378
  user:
    client-certificate: /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/NoKubernetes-490378/client.crt
    client-key: /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/NoKubernetes-490378/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-557854

>>> host: docker daemon status:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: docker daemon config:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /etc/docker/daemon.json:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: docker system info:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: cri-docker daemon status:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: cri-docker daemon config:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: cri-dockerd version:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: containerd daemon status:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: containerd daemon config:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /etc/containerd/config.toml:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: containerd config dump:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: crio daemon status:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: crio daemon config:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: /etc/crio:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"

>>> host: crio config:
* Profile "false-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557854"
----------------------- debugLogs end: false-557854 [took: 3.014405577s] --------------------------------
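Note the kubectl config dump above: only the NoKubernetes-490378 context exists, which is why every probe against the false-557854 context fails with "context was not found". A quick way to confirm which contexts a kubeconfig actually holds, assuming the same KUBECONFIG as the test run:

	kubectl config get-contexts
	kubectl config view --minify --flatten   # details of the current context only
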
helpers_test.go:175: Cleaning up "false-557854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-557854
--- PASS: TestNetworkPlugins/group/false (3.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (32.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (30.77531679s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-490378 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-490378 status -o json: exit status 2 (251.562546ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-490378","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
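The exit status 2 here is consistent with the JSON payload rather than an error in the test: minikube status deliberately returns a non-zero exit code when components are stopped, so a Running host with a Stopped kubelet and API server cannot exit 0. A small sketch for scripting against this, assuming jq is available on the host:

	out/minikube-linux-amd64 -p NoKubernetes-490378 status -o json | jq -r .Kubelet   # prints "Stopped" even though .Host is "Running"
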
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-490378
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-490378: (1.792060988s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (40.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.2171413s)
--- PASS: TestNoKubernetes/serial/Start (40.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-490378 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-490378 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.272231ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
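systemctl is-active --quiet exits 0 only when the queried unit is active, so minikube ssh exiting non-zero (the remote command failed with status 4) is precisely what this step asserts: no kubelet service is running in the --no-kubernetes VM. The same check without --quiet shows the state string, typically "inactive" or "unknown":

	out/minikube-linux-amd64 ssh -p NoKubernetes-490378 "sudo systemctl is-active kubelet"; echo $?
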

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-490378
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-490378: (1.338142071s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (40.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490378 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490378 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.185457993s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E1016 18:42:37.359200   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (2.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-490378 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-490378 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.256508ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestPause/serial/Start (70.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-050003 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-050003 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.858307951s)
--- PASS: TestPause/serial/Start (70.86s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (134.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.380325963 start -p stopped-upgrade-806110 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1016 18:42:54.283389   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.380325963 start -p stopped-upgrade-806110 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.573192557s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.380325963 -p stopped-upgrade-806110 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.380325963 -p stopped-upgrade-806110 stop: (1.797649228s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-806110 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-806110 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.143301296s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (134.51s)
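The upgrade path exercised above has three steps: provision the cluster with the legacy v1.32.0 release binary, stop it, then start it again with the binary under test, which must adopt the existing VM and configuration. Condensed from the log (the /tmp path is the temporary v1.32.0 binary staged by the Setup step):

	/tmp/minikube-v1.32.0.380325963 start -p stopped-upgrade-806110 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.32.0.380325963 -p stopped-upgrade-806110 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-806110 --memory=3072 --driver=kvm2 --container-runtime=crio
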

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (95.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.970661504s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.97s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-806110
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-806110: (1.224804992s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (62.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.81989361s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (94.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.82676234s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (96.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.664045821s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-557854 "pgrep -a kubelet"
I1016 18:45:28.805864   12767 config.go:182] Loaded profile config "auto-557854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)
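KubeletFlags simply greps the live kubelet process over ssh; inspecting its command line is also a handy manual check that the expected runtime is wired up. A sketch, where the exact endpoint value is an assumption about the crio default rather than something this log shows:

	out/minikube-linux-amd64 ssh -p auto-557854 "pgrep -a kubelet" | tr ' ' '\n' | grep container-runtime-endpoint
	# expected (assumption): --container-runtime-endpoint=unix:///var/run/crio/crio.sock
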

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-557854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-248wg" [70fde189-b37a-4b5b-84fe-50ad53adbfe1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-248wg" [70fde189-b37a-4b5b-84fe-50ad53adbfe1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005876343s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-557854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
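The DNS, Localhost, and HairPin probes above share one pattern: nslookup exercises cluster DNS, the localhost probe exercises in-pod loopback, and the hairpin probe checks that a pod can reach itself through its own service name (hairpin NAT). A manual spot check, assuming the netcat deployment from testdata/netcat-deployment.yaml is still present:

	kubectl --context auto-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"
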

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (86.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.578212716s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2vk74" [4082983c-12ec-4070-858b-06eb61fae938] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.129447073s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.13s)
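ControllerPod is a readiness gate on the CNI's own daemon set pod before any traffic tests run. An equivalent one-liner with kubectl wait, assuming the kindnet-557854 context and the same 10-minute budget:

	kubectl --context kindnet-557854 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=600s
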

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-557854 "pgrep -a kubelet"
I1016 18:46:04.583822   12767 config.go:182] Loaded profile config "kindnet-557854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-557854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-557854 replace --force -f testdata/netcat-deployment.yaml: (1.003332544s)
I1016 18:46:05.839552   12767 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1016 18:46:05.858009   12767 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6wzmn" [f6ab6c00-aff1-4ce7-b2ff-8bf988df9d6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6wzmn" [f6ab6c00-aff1-4ce7-b2ff-8bf988df9d6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005636713s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-557854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5fgm7" [6f237825-9833-4cd8-8f62-7df36dafd863] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-5fgm7" [6f237825-9833-4cd8-8f62-7df36dafd863] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005397963s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (74.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m14.940689793s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-557854 "pgrep -a kubelet"
I1016 18:46:37.342546   12767 config.go:182] Loaded profile config "calico-557854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-557854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2tjrv" [c72bf6c3-7d9f-47fe-8ba0-737b8135187c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2tjrv" [c72bf6c3-7d9f-47fe-8ba0-737b8135187c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.06586442s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-557854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-557854 "pgrep -a kubelet"
I1016 18:46:54.700407   12767 config.go:182] Loaded profile config "custom-flannel-557854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-557854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mkskw" [43d40613-305b-45c5-a17b-b7746376236a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mkskw" [43d40613-305b-45c5-a17b-b7746376236a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006779427s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-557854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (64.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-557854 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.841958676s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-557854 "pgrep -a kubelet"
I1016 18:47:22.318187   12767 config.go:182] Loaded profile config "enable-default-cni-557854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-557854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5ljvg" [683a1acb-e421-447e-85c2-53b76cb28525] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5ljvg" [683a1acb-e421-447e-85c2-53b76cb28525] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.025532292s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (65.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-242245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-242245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m5.897518157s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (65.90s)
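--kubernetes-version pins the cluster to a specific release (v1.28.0 here) regardless of the default the binary would otherwise deploy. A hedged spot check that the pin took effect, assuming the kubeconfig context name matches the profile:

	kubectl --context old-k8s-version-242245 version   # the Server Version line should report v1.28.x
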

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-557854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-p6gk6" [ec0aa6d1-81ed-4832-a324-5e155261ba0d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006434042s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (78.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-064555 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1016 18:47:54.283033   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-064555 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m18.808778727s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.81s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-557854 "pgrep -a kubelet"
I1016 18:47:56.076581   12767 config.go:182] Loaded profile config "flannel-557854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (15.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-557854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l6f9p" [4659c8ec-b7ad-42d4-bb3c-09c49f7b81c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l6f9p" [4659c8ec-b7ad-42d4-bb3c-09c49f7b81c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.004451195s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.32s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-557854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-557854 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
I1016 18:48:11.756547   12767 config.go:182] Loaded profile config "bridge-557854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:264: (dbg) Run:  kubectl --context flannel-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-557854 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sz5j5" [5cff26da-e44d-4830-a93a-0ceaf3dc2de1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sz5j5" [5cff26da-e44d-4830-a93a-0ceaf3dc2de1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006410158s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-557854 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-557854 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (60.21s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-715141 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-715141 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m0.213618658s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.21s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-242245 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3b749189-72c5-4f0e-851d-46876b423cda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3b749189-72c5-4f0e-851d-46876b423cda] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.006729797s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-242245 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)
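The DeployApp step above reduces to three kubectl calls. A minimal sketch, assuming the busybox manifest from testdata/busybox.yaml and the old-k8s-version-242245 context (the harness polls pod state with its own helpers; kubectl wait is a stand-in here):

	# Create the pod, wait for readiness within the same 8m0s budget, then read its open-file limit.
	kubectl --context old-k8s-version-242245 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-242245 wait --for=condition=ready pod/busybox --timeout=8m0s
	kubectl --context old-k8s-version-242245 exec busybox -- /bin/sh -c "ulimit -n"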

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-483961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-483961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m32.178620307s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.18s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-242245 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-242245 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.2138922s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-242245 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/old-k8s-version/serial/Stop (79.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-242245 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-242245 --alsologtostderr -v=3: (1m19.432307062s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (79.43s)

TestStartStop/group/no-preload/serial/DeployApp (12.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-064555 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4b0cdf12-ab6d-4d8d-a238-901f33ddac25] Pending
helpers_test.go:352: "busybox" [4b0cdf12-ab6d-4d8d-a238-901f33ddac25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4b0cdf12-ab6d-4d8d-a238-901f33ddac25] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004290349s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-064555 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-064555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-064555 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/no-preload/serial/Stop (78.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-064555 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-064555 --alsologtostderr -v=3: (1m18.942811486s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (78.94s)

TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-715141 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [729514bb-35d4-4cb1-ae5c-72a74cfb45fb] Pending
helpers_test.go:352: "busybox" [729514bb-35d4-4cb1-ae5c-72a74cfb45fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [729514bb-35d4-4cb1-ae5c-72a74cfb45fb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004771849s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-715141 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-715141 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-715141 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/embed-certs/serial/Stop (80.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-715141 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-715141 --alsologtostderr -v=3: (1m20.94475046s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (80.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-242245 -n old-k8s-version-242245
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-242245 -n old-k8s-version-242245: exit status 7 (76.970676ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-242245 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
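For reference, EnableAddonAfterStop is just: confirm the host is down, then enable an addon against the stopped profile. A minimal sketch, assuming old-k8s-version-242245 is stopped (flags taken from the log above):

	# "status" exits 7 when the host is stopped; the test treats that as acceptable.
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-242245 -n old-k8s-version-242245 || echo "status exit: $?"
	# The enable call records the addon in the profile config; SecondStart and
	# UserAppExistsAfterStop below then verify the dashboard pods actually come up.
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-242245 --images=MetricsScraper=registry.k8s.io/echoserver:1.4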

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-242245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-242245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (45.863000657s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-242245 -n old-k8s-version-242245
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.20s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-483961 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c27c8ef3-dc83-4a43-9d6a-39c2ae61cf00] Pending
helpers_test.go:352: "busybox" [c27c8ef3-dc83-4a43-9d6a-39c2ae61cf00] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c27c8ef3-dc83-4a43-9d6a-39c2ae61cf00] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004064538s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-483961 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-483961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1016 18:50:25.002936   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-483961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.387879098s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-483961 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.46s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (84.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-483961 --alsologtostderr -v=3
E1016 18:50:29.039887   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:29.046277   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:29.057703   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:29.079253   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:29.120723   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:29.202221   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:29.363755   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:29.685023   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:30.326513   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:31.608534   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:34.170237   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:39.291621   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-483961 --alsologtostderr -v=3: (1m24.23456091s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (84.23s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-064555 -n no-preload-064555
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-064555 -n no-preload-064555: exit status 7 (65.337528ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-064555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (60.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-064555 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1016 18:50:41.932363   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-064555 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (59.956876588s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-064555 -n no-preload-064555
E1016 18:51:41.352602   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-tgx8c" [42b2e251-83d1-48cb-99ae-1445c6f9e9d6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1016 18:50:49.533940   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-tgx8c" [42b2e251-83d1-48cb-99ae-1445c6f9e9d6] Running
E1016 18:50:58.073847   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:58.080288   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:58.091663   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:58.113113   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:58.154503   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:58.235960   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:58.397600   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:58.719435   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:59.361289   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:00.642989   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.005262478s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-715141 -n embed-certs-715141
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-715141 -n embed-certs-715141: exit status 7 (87.297449ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-715141 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-tgx8c" [42b2e251-83d1-48cb-99ae-1445c6f9e9d6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004276257s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-242245 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/SecondStart (45.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-715141 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1016 18:51:03.204674   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-715141 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (44.191410279s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-715141 -n embed-certs-715141
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-242245 image list --format=json
E1016 18:51:08.326941   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-242245 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-242245 --alsologtostderr -v=1: (1.078261391s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-242245 -n old-k8s-version-242245
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-242245 -n old-k8s-version-242245: exit status 2 (274.678948ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-242245 -n old-k8s-version-242245
E1016 18:51:10.016063   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-242245 -n old-k8s-version-242245: exit status 2 (246.653883ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-242245 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-242245 -n old-k8s-version-242245
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-242245 -n old-k8s-version-242245
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.24s)
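The Pause check above drives a pause/status/unpause cycle. A minimal sketch against the same profile, using the commands from the log:

	# After pause: {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2.
	out/minikube-linux-amd64 pause -p old-k8s-version-242245 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-242245 -n old-k8s-version-242245 || true
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-242245 -n old-k8s-version-242245 || true
	# Unpause and re-check; both status calls exit zero again, as in the log above.
	out/minikube-linux-amd64 unpause -p old-k8s-version-242245 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-242245 -n old-k8s-version-242245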

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (53.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-514561 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1016 18:51:18.568935   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.099342   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.105904   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.117434   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.139073   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.180574   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.262175   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.423530   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:31.744899   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:32.387196   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:33.668922   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:36.231148   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:39.051063   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/kindnet-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-514561 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (53.147469504s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.18s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jwhsl" [5e812534-39e2-4085-87ac-a49b02548d95] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jwhsl" [5e812534-39e2-4085-87ac-a49b02548d95] Running
E1016 18:51:50.977985   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/auto-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.182269455s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-trfzt" [dee1a419-1091-47da-b120-df9df662f7e5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-trfzt" [dee1a419-1091-47da-b120-df9df662f7e5] Running
E1016 18:51:57.545952   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:00.107842   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004003373s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961: exit status 7 (81.756467ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-483961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1016 18:51:51.594836   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (41.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-483961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1016 18:51:54.975813   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:54.982235   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:54.993703   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:55.015166   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:55.056608   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:55.138289   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:55.300247   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:51:55.621995   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-483961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (41.119223918s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961
E1016 18:52:32.834646   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (41.43s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jwhsl" [5e812534-39e2-4085-87ac-a49b02548d95] Running
E1016 18:51:56.263720   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004995257s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-064555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-064555 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.31s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-064555 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-064555 -n no-preload-064555
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-064555 -n no-preload-064555: exit status 2 (303.021359ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-064555 -n no-preload-064555
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-064555 -n no-preload-064555: exit status 2 (290.77933ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-064555 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-064555 -n no-preload-064555
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-064555 -n no-preload-064555
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.31s)
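The Pause check above encodes minikube's status semantics: after `pause`, `status` deliberately exits with code 2 because the apiserver reports Paused while the kubelet reports Stopped, which is why the harness prints "status error: exit status 2 (may be ok)" rather than failing. The same sequence by hand (commands taken from the log; after unpause the status calls should report Running again and exit 0):

  out/minikube-linux-amd64 pause -p no-preload-064555 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-064555 -n no-preload-064555   # Paused, exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-064555 -n no-preload-064555     # Stopped, exit 2
  out/minikube-linux-amd64 unpause -p no-preload-064555 --alsologtostderr -v=1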

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-trfzt" [dee1a419-1091-47da-b120-df9df662f7e5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004845701s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-715141 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-514561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-514561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.173859253s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)
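EnableAddonWhileActive also exercises the addon image-override flags: `--images` replaces an addon's default image and `--registries` points it at an alternate registry, here the deliberately unresolvable fake.domain, since the step presumably only verifies that the addon can be enabled, not that its image pulls. Verbatim from the run, reflowed:

  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-514561 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain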

TestStartStop/group/newest-cni/serial/Stop (13.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-514561 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-514561 --alsologtostderr -v=3: (13.845058055s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.85s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-715141 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-715141 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-715141 -n embed-certs-715141
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-715141 -n embed-certs-715141: exit status 2 (277.756274ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-715141 -n embed-certs-715141
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-715141 -n embed-certs-715141: exit status 2 (264.555156ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-715141 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-715141 -n embed-certs-715141
E1016 18:52:12.077086   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-715141 -n embed-certs-715141
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-514561 -n newest-cni-514561
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-514561 -n newest-cni-514561: exit status 7 (75.270194ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-514561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (35.96s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-514561 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1016 18:52:22.580311   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:22.586664   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:22.598065   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:22.619463   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:22.661181   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:22.742623   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:22.904186   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:23.226374   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:23.868158   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:25.150253   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:27.712556   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-514561 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (35.602045463s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-514561 -n newest-cni-514561
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.96s)
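For reference, the SecondStart restart reuses the first boot's flags: `--wait=apiserver,system_pods,default_sa` narrows what minikube blocks on before declaring success (user pods never schedule here, per the "cni mode requires additional setup" warnings above), and `--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16` is handed through to kubeadm because the profile runs `--network-plugin=cni` with no CNI installed yet. The command as run, reflowed from the log:

  out/minikube-linux-amd64 start -p newest-cni-514561 --memory=3072 \
    --alsologtostderr --wait=apiserver,system_pods,default_sa \
    --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=kvm2 --container-runtime=crio --auto-update-drivers=false \
    --kubernetes-version=v1.34.1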

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bwgbf" [ec9ae990-c301-43f5-9f70-aee9fc0b6aa4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1016 18:52:35.953434   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/custom-flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bwgbf" [ec9ae990-c301-43f5-9f70-aee9fc0b6aa4] Running
E1016 18:52:43.076543   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/enable-default-cni-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004106663s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bwgbf" [ec9ae990-c301-43f5-9f70-aee9fc0b6aa4] Running
E1016 18:52:49.784695   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:49.791150   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:49.802573   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:49.823970   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:49.865465   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:49.947058   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:50.108671   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:50.430903   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:51.073055   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:52.354672   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:52:53.038726   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/calico-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004272648s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-483961 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-483961 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-483961 --alsologtostderr -v=1
E1016 18:52:54.283264   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/addons-019580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961: exit status 2 (272.319343ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961: exit status 2 (281.75276ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-483961 --alsologtostderr -v=1
E1016 18:52:54.915977   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-483961 -n default-k8s-diff-port-483961
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-514561 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.95s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-514561 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-514561 --alsologtostderr -v=1: (1.574086195s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-514561 -n newest-cni-514561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-514561 -n newest-cni-514561: exit status 2 (296.742043ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-514561 -n newest-cni-514561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-514561 -n newest-cni-514561: exit status 2 (293.433372ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-514561 --alsologtostderr -v=1
E1016 18:53:00.037500   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/flannel-557854/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-514561 -n newest-cni-514561
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-514561 -n newest-cni-514561
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.95s)

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.02
268 TestNetworkPlugins/group/cilium 4.14
281 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019580 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.02s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
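The dump that follows is the standard per-network-plugin debug log. Because the kubenet variant is skipped before any cluster is created (crio requires a real CNI plugin, and kubenet is not one), every probe against the kubenet-557854 profile predictably fails with "context was not found" or "Profile ... not found"; this is expected residue of the skip, not an additional failure.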
----------------------- debugLogs start: kubenet-557854 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-557854

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-557854

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: /etc/hosts:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: /etc/resolv.conf:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-557854

>>> host: crictl pods:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: crictl containers:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> k8s: describe netcat deployment:
error: context "kubenet-557854" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-557854" does not exist

>>> k8s: netcat logs:
error: context "kubenet-557854" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-557854" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-557854" does not exist

>>> k8s: coredns logs:
error: context "kubenet-557854" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-557854" does not exist

>>> k8s: api server logs:
error: context "kubenet-557854" does not exist

>>> host: /etc/cni:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: ip a s:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: ip r s:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: iptables-save:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: iptables table nat:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-557854" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-557854" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-557854" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: kubelet daemon config:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> k8s: kubelet logs:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-557854

>>> host: docker daemon status:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: docker daemon config:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: docker system info:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: cri-docker daemon status:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: cri-docker daemon config:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: cri-dockerd version:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: containerd daemon status:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

>>> host: containerd daemon config:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557854"

                                                
                                                
----------------------- debugLogs end: kubenet-557854 [took: 2.864978327s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-557854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-557854
--- SKIP: TestNetworkPlugins/group/kubenet (3.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1016 18:40:41.932537   12767 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/functional-032307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-557854 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-557854
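
The probes above try the same lookup several ways (nslookup, dig over UDP and TCP, raw nc) against the conventional cluster DNS service address 10.96.0.10. For reference, a minimal Go equivalent of that probe, assuming the same service address, might look like the following; this is illustrative only, since the suite runs these as shell commands inside a netcat pod:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route every lookup through the cluster DNS address from the log
	// (10.96.0.10); "udp" or "tcp" is chosen by the resolver itself.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err) // expected here: no cluster exists
		return
	}
	fmt.Println("resolved:", addrs)
}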

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-557854" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21738-8816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.67:8443
  name: NoKubernetes-490378
contexts:
- context:
    cluster: NoKubernetes-490378
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-490378
  name: NoKubernetes-490378
current-context: NoKubernetes-490378
kind: Config
users:
- name: NoKubernetes-490378
  user:
    client-certificate: /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/NoKubernetes-490378/client.crt
    client-key: /home/jenkins/minikube-integration/21738-8816/.minikube/profiles/NoKubernetes-490378/client.key
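
This kubeconfig also explains why every cilium probe above fails: the only surviving entry is a leftover NoKubernetes-490378 profile, and no cilium-557854 context was ever written. A hedged sketch of pruning such a stale entry with client-go follows; the path and the cleanup policy are assumptions, not minikube's actual teardown logic:

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/.kube/config" // hypothetical path, not from the log
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	stale := "NoKubernetes-490378" // the leftover profile shown above
	delete(cfg.Contexts, stale)
	delete(cfg.Clusters, stale)
	delete(cfg.AuthInfos, stale)
	if cfg.CurrentContext == stale {
		cfg.CurrentContext = "" // better no default than a dangling one
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatalf("write kubeconfig: %v", err)
	}
}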

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-557854

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-557854" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557854"

                                                
                                                
----------------------- debugLogs end: cilium-557854 [took: 3.962698131s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-557854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-557854
--- SKIP: TestNetworkPlugins/group/cilium (4.14s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-947779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-947779
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
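
The skip at start_stop_delete_test.go:101 is a driver gate: the test runs only under the virtualbox driver, so on this KVM run it exits immediately after cleaning up its profile. A minimal sketch of such a gate, assuming a hypothetical driverName() accessor in place of however the suite actually resolves the --driver flag:

package startstop

import "testing"

// driverName is a hypothetical stand-in for the suite's driver lookup;
// the real check lives at start_stop_delete_test.go:101.
func driverName() string { return "kvm2" }

func TestDisableDriverMounts(t *testing.T) {
	if driverName() != "virtualbox" {
		t.Skipf("skipping: disable-driver-mounts only runs on virtualbox, got %q", driverName())
	}
	// ... exercise minikube start --disable-driver-mounts here ...
}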

                                                
                                    