Test Report: KVM_Linux_crio 22128

                    
2cb2c94398211ca18cf7c1877ff6bae2d6b3d16e:2025-12-13:42756

Failed tests (3/437)

Order   Failed test                                             Duration (s)
46      TestAddons/parallel/Ingress                             153.15
130     TestFunctional/parallel/ImageCommands/ImageListShort    2.47
345     TestPreload                                             146.65
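To reproduce one of these failures locally, the integration suite can be narrowed to a single test with go test's -run filter. A minimal sketch from a minikube source checkout (hedged: the --minikube-start-args flag and its values are an assumption inferred from the start command recorded in the Audit log below; this report does not show the exact invocation the CI job used):

    # re-run only the failing Ingress test, using the same driver and runtime as this job
    go test ./test/integration -run "TestAddons/parallel/Ingress" -v -timeout 60m \
        --minikube-start-args="--driver=kvm2 --container-runtime=crio"
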
TestAddons/parallel/Ingress (153.15s)
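The step that fails below is the in-VM curl against the ingress endpoint: the ssh command exits with status 28, which matches curl's "operation timed out" error code, so the request most likely never received a response from the nginx backend behind the ingress. To repeat the same probe by hand against a live cluster with this profile (the command is taken verbatim from the test; addons-917695 is the profile name specific to this run):

    out/minikube-linux-amd64 -p addons-917695 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"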

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-917695 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-917695 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-917695 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [01c1c75f-6820-4ed0-adec-927c0fe8b534] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [01c1c75f-6820-4ed0-adec-927c0fe8b534] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003795786s
I1213 08:32:50.965234    9697 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-917695 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.804116547s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-917695 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.154
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-917695 -n addons-917695
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 logs -n 25: (1.231207741s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-433374                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-433374 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-067349 --alsologtostderr --binary-mirror http://127.0.0.1:40107 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-067349 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-067349                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-067349 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-917695                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-917695                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ start   │ -p addons-917695 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:31 UTC │
	│ addons  │ addons-917695 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │ 13 Dec 25 08:31 UTC │
	│ addons  │ addons-917695 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ enable headlamp -p addons-917695 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ ssh     │ addons-917695 ssh cat /opt/local-path-provisioner/pvc-e8937d4d-4320-4d8c-b491-c79dee89d1bb_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:33 UTC │
	│ ip      │ addons-917695 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ ssh     │ addons-917695 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-917695                                                                                                                                                                                                                                                                                                                                                                                         │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
	│ addons  │ addons-917695 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:33 UTC │
	│ addons  │ addons-917695 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:33 UTC │ 13 Dec 25 08:33 UTC │
	│ addons  │ addons-917695 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:33 UTC │ 13 Dec 25 08:33 UTC │
	│ addons  │ addons-917695 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:33 UTC │ 13 Dec 25 08:33 UTC │
	│ ip      │ addons-917695 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-917695        │ jenkins │ v1.37.0 │ 13 Dec 25 08:35 UTC │ 13 Dec 25 08:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:43.910619   10611 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:29:43.910714   10611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:43.910718   10611 out.go:374] Setting ErrFile to fd 2...
	I1213 08:29:43.910722   10611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:43.910901   10611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:29:43.911413   10611 out.go:368] Setting JSON to false
	I1213 08:29:43.912327   10611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":728,"bootTime":1765613856,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:29:43.912382   10611 start.go:143] virtualization: kvm guest
	I1213 08:29:43.914633   10611 out.go:179] * [addons-917695] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:29:43.916044   10611 notify.go:221] Checking for updates...
	I1213 08:29:43.916100   10611 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:29:43.917514   10611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:29:43.919038   10611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:29:43.920410   10611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:29:43.921842   10611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:29:43.923389   10611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:29:43.925026   10611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:43.956336   10611 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 08:29:43.957633   10611 start.go:309] selected driver: kvm2
	I1213 08:29:43.957648   10611 start.go:927] validating driver "kvm2" against <nil>
	I1213 08:29:43.957663   10611 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:29:43.958400   10611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:43.958638   10611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:29:43.958664   10611 cni.go:84] Creating CNI manager for ""
	I1213 08:29:43.958720   10611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 08:29:43.958731   10611 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:43.958792   10611 start.go:353] cluster config:
	{Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:43.958908   10611 iso.go:125] acquiring lock: {Name:mk6cfae0203e3172b0791a477e21fba41da25205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:29:43.960638   10611 out.go:179] * Starting "addons-917695" primary control-plane node in "addons-917695" cluster
	I1213 08:29:43.962221   10611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:29:43.962256   10611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 08:29:43.962278   10611 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:43.962404   10611 preload.go:238] Found /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 08:29:43.962419   10611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 08:29:43.962760   10611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/config.json ...
	I1213 08:29:43.962789   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/config.json: {Name:mkec48c10906261e97c7f0e36ada6310ae865811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:43.962936   10611 start.go:360] acquireMachinesLock for addons-917695: {Name:mk6c8e990a56a1510f4ba4283e9407bcc2a7ff5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 08:29:43.963000   10611 start.go:364] duration metric: took 48.605µs to acquireMachinesLock for "addons-917695"
	I1213 08:29:43.963023   10611 start.go:93] Provisioning new machine with config: &{Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 08:29:43.963095   10611 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 08:29:43.964935   10611 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1213 08:29:43.965087   10611 start.go:159] libmachine.API.Create for "addons-917695" (driver="kvm2")
	I1213 08:29:43.965120   10611 client.go:173] LocalClient.Create starting
	I1213 08:29:43.965210   10611 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem
	I1213 08:29:44.104919   10611 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem
	I1213 08:29:44.172615   10611 main.go:143] libmachine: creating domain...
	I1213 08:29:44.172634   10611 main.go:143] libmachine: creating network...
	I1213 08:29:44.174134   10611 main.go:143] libmachine: found existing default network
	I1213 08:29:44.174436   10611 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 08:29:44.175000   10611 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d30c60}
	I1213 08:29:44.175105   10611 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-917695</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 08:29:44.181736   10611 main.go:143] libmachine: creating private network mk-addons-917695 192.168.39.0/24...
	I1213 08:29:44.250637   10611 main.go:143] libmachine: private network mk-addons-917695 192.168.39.0/24 created
	I1213 08:29:44.251007   10611 main.go:143] libmachine: <network>
	  <name>mk-addons-917695</name>
	  <uuid>3c545422-f55e-4a14-8933-1395b1844c41</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:a2:6a:d6'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 08:29:44.251047   10611 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695 ...
	I1213 08:29:44.251075   10611 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22128-5761/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1213 08:29:44.251083   10611 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:29:44.251163   10611 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22128-5761/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22128-5761/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1213 08:29:44.522355   10611 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa...
	I1213 08:29:44.601651   10611 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/addons-917695.rawdisk...
	I1213 08:29:44.601690   10611 main.go:143] libmachine: Writing magic tar header
	I1213 08:29:44.601710   10611 main.go:143] libmachine: Writing SSH key tar header
	I1213 08:29:44.601778   10611 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695 ...
	I1213 08:29:44.601833   10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695
	I1213 08:29:44.601862   10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695 (perms=drwx------)
	I1213 08:29:44.601878   10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761/.minikube/machines
	I1213 08:29:44.601891   10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761/.minikube/machines (perms=drwxr-xr-x)
	I1213 08:29:44.601906   10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:29:44.601918   10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761/.minikube (perms=drwxr-xr-x)
	I1213 08:29:44.601927   10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761
	I1213 08:29:44.601937   10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761 (perms=drwxrwxr-x)
	I1213 08:29:44.601947   10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1213 08:29:44.601955   10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 08:29:44.601967   10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1213 08:29:44.601974   10611 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 08:29:44.601982   10611 main.go:143] libmachine: checking permissions on dir: /home
	I1213 08:29:44.601991   10611 main.go:143] libmachine: skipping /home - not owner
	I1213 08:29:44.601995   10611 main.go:143] libmachine: defining domain...
	I1213 08:29:44.603276   10611 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-917695</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/addons-917695.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-917695'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1213 08:29:44.611769   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:0a:f6:8b in network default
	I1213 08:29:44.612436   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:44.612457   10611 main.go:143] libmachine: starting domain...
	I1213 08:29:44.612461   10611 main.go:143] libmachine: ensuring networks are active...
	I1213 08:29:44.613364   10611 main.go:143] libmachine: Ensuring network default is active
	I1213 08:29:44.613802   10611 main.go:143] libmachine: Ensuring network mk-addons-917695 is active
	I1213 08:29:44.614490   10611 main.go:143] libmachine: getting domain XML...
	I1213 08:29:44.615624   10611 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-917695</name>
	  <uuid>412eefcb-63ce-429c-917f-a5530725ef67</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/addons-917695.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4b:48:3f'/>
	      <source network='mk-addons-917695'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:0a:f6:8b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 08:29:45.919686   10611 main.go:143] libmachine: waiting for domain to start...
	I1213 08:29:45.920856   10611 main.go:143] libmachine: domain is now running
	I1213 08:29:45.920875   10611 main.go:143] libmachine: waiting for IP...
	I1213 08:29:45.921671   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:45.922208   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:45.922228   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:45.922511   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:45.922549   10611 retry.go:31] will retry after 230.470673ms: waiting for domain to come up
	I1213 08:29:46.155199   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:46.155775   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:46.155794   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:46.156113   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:46.156157   10611 retry.go:31] will retry after 270.816547ms: waiting for domain to come up
	I1213 08:29:46.428940   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:46.429556   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:46.429575   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:46.429871   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:46.429902   10611 retry.go:31] will retry after 384.76637ms: waiting for domain to come up
	I1213 08:29:46.816564   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:46.817247   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:46.817270   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:46.817742   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:46.817795   10611 retry.go:31] will retry after 480.513752ms: waiting for domain to come up
	I1213 08:29:47.299921   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:47.300523   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:47.300545   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:47.300903   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:47.300947   10611 retry.go:31] will retry after 540.854612ms: waiting for domain to come up
	I1213 08:29:47.843431   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:47.843952   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:47.843966   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:47.844227   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:47.844257   10611 retry.go:31] will retry after 759.977685ms: waiting for domain to come up
	I1213 08:29:48.606416   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:48.606965   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:48.606983   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:48.607342   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:48.607380   10611 retry.go:31] will retry after 897.413983ms: waiting for domain to come up
	I1213 08:29:49.506692   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:49.507407   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:49.507433   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:49.507803   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:49.507844   10611 retry.go:31] will retry after 1.273307459s: waiting for domain to come up
	I1213 08:29:50.782431   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:50.783022   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:50.783038   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:50.783340   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:50.783372   10611 retry.go:31] will retry after 1.398779355s: waiting for domain to come up
	I1213 08:29:52.184072   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:52.184617   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:52.184631   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:52.184920   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:52.184950   10611 retry.go:31] will retry after 1.58107352s: waiting for domain to come up
	I1213 08:29:53.768449   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:53.769119   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:53.769139   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:53.769545   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:53.769580   10611 retry.go:31] will retry after 2.212729067s: waiting for domain to come up
	I1213 08:29:55.985080   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:55.985767   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:55.985787   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:55.986119   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:55.986155   10611 retry.go:31] will retry after 2.46066475s: waiting for domain to come up
	I1213 08:29:58.449742   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:29:58.450279   10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
	I1213 08:29:58.450308   10611 main.go:143] libmachine: trying to list again with source=arp
	I1213 08:29:58.450616   10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
	I1213 08:29:58.450652   10611 retry.go:31] will retry after 3.687601265s: waiting for domain to come up
	I1213 08:30:02.141825   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.142421   10611 main.go:143] libmachine: domain addons-917695 has current primary IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.142438   10611 main.go:143] libmachine: found domain IP: 192.168.39.154
	I1213 08:30:02.142446   10611 main.go:143] libmachine: reserving static IP address...
	I1213 08:30:02.143010   10611 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-917695", mac: "52:54:00:4b:48:3f", ip: "192.168.39.154"} in network mk-addons-917695
	I1213 08:30:02.345588   10611 main.go:143] libmachine: reserved static IP address 192.168.39.154 for domain addons-917695
	I1213 08:30:02.345614   10611 main.go:143] libmachine: waiting for SSH...
	I1213 08:30:02.345622   10611 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 08:30:02.349381   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.350030   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:02.350063   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.350305   10611 main.go:143] libmachine: Using SSH client type: native
	I1213 08:30:02.350527   10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 08:30:02.350538   10611 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 08:30:02.456411   10611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:30:02.457195   10611 main.go:143] libmachine: domain creation complete
	I1213 08:30:02.459185   10611 machine.go:94] provisionDockerMachine start ...
	I1213 08:30:02.461724   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.462101   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:02.462125   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.462321   10611 main.go:143] libmachine: Using SSH client type: native
	I1213 08:30:02.462501   10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 08:30:02.462529   10611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:30:02.566432   10611 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 08:30:02.566468   10611 buildroot.go:166] provisioning hostname "addons-917695"
	I1213 08:30:02.569643   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.570114   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:02.570138   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.570342   10611 main.go:143] libmachine: Using SSH client type: native
	I1213 08:30:02.570577   10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 08:30:02.570590   10611 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-917695 && echo "addons-917695" | sudo tee /etc/hostname
	I1213 08:30:02.692235   10611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-917695
	
	I1213 08:30:02.695577   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.696070   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:02.696096   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.696363   10611 main.go:143] libmachine: Using SSH client type: native
	I1213 08:30:02.696597   10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 08:30:02.696616   10611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-917695' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-917695/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-917695' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:30:02.809044   10611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:30:02.809074   10611 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5761/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5761/.minikube}
	I1213 08:30:02.809092   10611 buildroot.go:174] setting up certificates
	I1213 08:30:02.809100   10611 provision.go:84] configureAuth start
	I1213 08:30:02.811840   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.812347   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:02.812376   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.814833   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.815381   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:02.815409   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.815593   10611 provision.go:143] copyHostCerts
	I1213 08:30:02.815661   10611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/ca.pem (1078 bytes)
	I1213 08:30:02.815822   10611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/cert.pem (1123 bytes)
	I1213 08:30:02.815895   10611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/key.pem (1679 bytes)
	I1213 08:30:02.815945   10611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem org=jenkins.addons-917695 san=[127.0.0.1 192.168.39.154 addons-917695 localhost minikube]
	I1213 08:30:02.971240   10611 provision.go:177] copyRemoteCerts
	I1213 08:30:02.971317   10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:30:02.974396   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.974755   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:02.974781   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:02.974942   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:03.058526   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 08:30:03.089062   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 08:30:03.119237   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:30:03.148632   10611 provision.go:87] duration metric: took 339.497846ms to configureAuth
	I1213 08:30:03.148670   10611 buildroot.go:189] setting minikube options for container-runtime
	I1213 08:30:03.148887   10611 config.go:182] Loaded profile config "addons-917695": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:30:03.151380   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.151700   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:03.151722   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.151912   10611 main.go:143] libmachine: Using SSH client type: native
	I1213 08:30:03.152136   10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 08:30:03.152151   10611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 08:30:03.435285   10611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 08:30:03.435346   10611 machine.go:97] duration metric: took 976.140868ms to provisionDockerMachine
	I1213 08:30:03.435362   10611 client.go:176] duration metric: took 19.470235648s to LocalClient.Create
	I1213 08:30:03.435379   10611 start.go:167] duration metric: took 19.47029073s to libmachine.API.Create "addons-917695"
	I1213 08:30:03.435389   10611 start.go:293] postStartSetup for "addons-917695" (driver="kvm2")
	I1213 08:30:03.435398   10611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:30:03.435468   10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:30:03.438742   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.439250   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:03.439282   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.439510   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:03.522537   10611 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:30:03.527857   10611 info.go:137] Remote host: Buildroot 2025.02
	I1213 08:30:03.527889   10611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5761/.minikube/addons for local assets ...
	I1213 08:30:03.527973   10611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5761/.minikube/files for local assets ...
	I1213 08:30:03.527998   10611 start.go:296] duration metric: took 92.603951ms for postStartSetup
	I1213 08:30:03.543261   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.543779   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:03.543810   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.544052   10611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/config.json ...
	I1213 08:30:03.565570   10611 start.go:128] duration metric: took 19.602458116s to createHost
	I1213 08:30:03.568840   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.569304   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:03.569334   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.569596   10611 main.go:143] libmachine: Using SSH client type: native
	I1213 08:30:03.569812   10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 08:30:03.569825   10611 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 08:30:03.675083   10611 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765614603.638603400
	
	I1213 08:30:03.675110   10611 fix.go:216] guest clock: 1765614603.638603400
	I1213 08:30:03.675120   10611 fix.go:229] Guest: 2025-12-13 08:30:03.6386034 +0000 UTC Remote: 2025-12-13 08:30:03.565601791 +0000 UTC m=+19.702059265 (delta=73.001609ms)
	I1213 08:30:03.675140   10611 fix.go:200] guest clock delta is within tolerance: 73.001609ms
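For reference, the delta reported above is simply the guest timestamp minus the host-side timestamp: 1765614603.638603400 - 1765614603.565601791 ≈ 0.073001609 s, i.e. the 73.001609ms shown, which is inside minikube's clock-skew tolerance, so no guest clock adjustment is made.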
	I1213 08:30:03.675146   10611 start.go:83] releasing machines lock for "addons-917695", held for 19.712134993s
	I1213 08:30:03.678274   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.678743   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:03.678781   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.679388   10611 ssh_runner.go:195] Run: cat /version.json
	I1213 08:30:03.679466   10611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:30:03.682446   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.682895   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.682898   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:03.682931   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.683093   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:03.683410   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:03.683440   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:03.683665   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:03.761124   10611 ssh_runner.go:195] Run: systemctl --version
	I1213 08:30:03.797157   10611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 08:30:04.201179   10611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:30:04.210744   10611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:30:04.210831   10611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:30:04.231723   10611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 08:30:04.231749   10611 start.go:496] detecting cgroup driver to use...
	I1213 08:30:04.231822   10611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:30:04.252259   10611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:30:04.269142   10611 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:30:04.269213   10611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:30:04.286696   10611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:30:04.303615   10611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:30:04.451538   10611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:30:04.668708   10611 docker.go:234] disabling docker service ...
	I1213 08:30:04.668773   10611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:30:04.686445   10611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:30:04.702125   10611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:30:04.862187   10611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:30:05.005473   10611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:30:05.022254   10611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:30:05.045958   10611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 08:30:05.046023   10611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:30:05.058545   10611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 08:30:05.058613   10611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:30:05.071231   10611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:30:05.084958   10611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:30:05.098034   10611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:30:05.111970   10611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:30:05.125146   10611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:30:05.148344   10611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
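The net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf is, roughly, the following settings (a hedged sketch of the resulting keys, not a verbatim copy of the file; surrounding TOML sections are omitted):

    cat /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]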
	I1213 08:30:05.162450   10611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:30:05.173485   10611 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 08:30:05.173594   10611 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 08:30:05.194469   10611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
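The br_netfilter load and ip_forward write above are applied at runtime only. If you wanted the same settings to survive a reboot outside of minikube's own provisioning, the conventional approach would be something like this (illustrative only; minikube does not run these here):

    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf '%s\n' 'net.ipv4.ip_forward = 1' 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system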
	I1213 08:30:05.206734   10611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:30:05.349757   10611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 08:30:05.458053   10611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 08:30:05.458136   10611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 08:30:05.464185   10611 start.go:564] Will wait 60s for crictl version
	I1213 08:30:05.464270   10611 ssh_runner.go:195] Run: which crictl
	I1213 08:30:05.468671   10611 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 08:30:05.514193   10611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 08:30:05.514332   10611 ssh_runner.go:195] Run: crio --version
	I1213 08:30:05.547990   10611 ssh_runner.go:195] Run: crio --version
	I1213 08:30:05.580847   10611 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 08:30:05.585097   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:05.585519   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:05.585542   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:05.585693   10611 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 08:30:05.590517   10611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:30:05.606640   10611 kubeadm.go:884] updating cluster {Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:30:05.606774   10611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:30:05.606839   10611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:30:05.637201   10611 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1213 08:30:05.637265   10611 ssh_runner.go:195] Run: which lz4
	I1213 08:30:05.642102   10611 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 08:30:05.646970   10611 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 08:30:05.647001   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1213 08:30:06.862207   10611 crio.go:462] duration metric: took 1.220146055s to copy over tarball
	I1213 08:30:06.862271   10611 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 08:30:08.323918   10611 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.461612722s)
	I1213 08:30:08.323954   10611 crio.go:469] duration metric: took 1.461721609s to extract the tarball
	I1213 08:30:08.323964   10611 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 08:30:08.360231   10611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:30:08.397905   10611 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 08:30:08.397930   10611 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:30:08.397937   10611 kubeadm.go:935] updating node { 192.168.39.154 8443 v1.34.2 crio true true} ...
	I1213 08:30:08.398022   10611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-917695 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:30:08.398107   10611 ssh_runner.go:195] Run: crio config
	I1213 08:30:08.444082   10611 cni.go:84] Creating CNI manager for ""
	I1213 08:30:08.444114   10611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 08:30:08.444144   10611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:30:08.444171   10611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-917695 NodeName:addons-917695 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:30:08.444344   10611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-917695"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:30:08.444419   10611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 08:30:08.456349   10611 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:30:08.456431   10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:30:08.468047   10611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1213 08:30:08.488645   10611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 08:30:08.510140   10611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
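At this point the generated config shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the guest. If you want to sanity-check such a config before kubeadm init runs, recent kubeadm releases can validate it directly (illustrative; the test itself does not run this step):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml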
	I1213 08:30:08.531080   10611 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1213 08:30:08.535254   10611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:30:08.549769   10611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:30:08.692475   10611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:30:08.713922   10611 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695 for IP: 192.168.39.154
	I1213 08:30:08.713952   10611 certs.go:195] generating shared ca certs ...
	I1213 08:30:08.713972   10611 certs.go:227] acquiring lock for ca certs: {Name:mkfb64e4be02ab559f3d464592a7c41204abf76e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.714156   10611 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key
	I1213 08:30:08.791705   10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt ...
	I1213 08:30:08.791740   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt: {Name:mkc8a5af04c5a9b6d079a5530dcd1e6a5fc22e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.791947   10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key ...
	I1213 08:30:08.791963   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key: {Name:mk614c737742b97b662e74d243aaef69b1ba86df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.792046   10611 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key
	I1213 08:30:08.841379   10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.crt ...
	I1213 08:30:08.841408   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.crt: {Name:mka883a47275da5988ed8e7035e45264ecf1ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.841580   10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key ...
	I1213 08:30:08.841591   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key: {Name:mkb565d2ac71908ab3d6e138d8cfd0d1be094737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.841663   10611 certs.go:257] generating profile certs ...
	I1213 08:30:08.841722   10611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.key
	I1213 08:30:08.841742   10611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt with IP's: []
	I1213 08:30:08.919419   10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt ...
	I1213 08:30:08.919447   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: {Name:mk389650a7c35b6e97d3fe3f8f8863c24b68c72f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.919649   10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.key ...
	I1213 08:30:08.919669   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.key: {Name:mk75631fde15dfff0a6240b3f8eab3a9c72961ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.919801   10611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7
	I1213 08:30:08.919823   10611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154]
	I1213 08:30:08.970970   10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7 ...
	I1213 08:30:08.970999   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7: {Name:mka56b0b30da6ad22dddb23e8d79f1e2bcd283ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.971179   10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7 ...
	I1213 08:30:08.971196   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7: {Name:mk4527f83c536629075891d81bdbc0e535da620d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:08.971322   10611 certs.go:382] copying /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7 -> /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt
	I1213 08:30:08.971402   10611 certs.go:386] copying /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7 -> /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key
	I1213 08:30:08.971461   10611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key
	I1213 08:30:08.971496   10611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt with IP's: []
	I1213 08:30:09.020347   10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt ...
	I1213 08:30:09.020377   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt: {Name:mkfb56d2b3c725762104423dd4a518c7879e9dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:09.020593   10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key ...
	I1213 08:30:09.020609   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key: {Name:mk237e900888ac8b10af5100ccf5d85988c42b40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:09.020821   10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:30:09.020859   10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem (1078 bytes)
	I1213 08:30:09.020883   10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:30:09.020907   10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem (1679 bytes)
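The profile apiserver certificate generated above is signed for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154]. An illustrative way to confirm that on the host, using the paths from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt \
      | grep -A1 'Subject Alternative Name'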
	I1213 08:30:09.021413   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:30:09.053281   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 08:30:09.082072   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:30:09.112166   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:30:09.142128   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 08:30:09.171512   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:30:09.200662   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:30:09.230199   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:30:09.259478   10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:30:09.287984   10611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:30:09.308129   10611 ssh_runner.go:195] Run: openssl version
	I1213 08:30:09.314861   10611 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:30:09.326923   10611 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:30:09.338715   10611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:30:09.344072   10611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:30:09.344126   10611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:30:09.351482   10611 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:30:09.363385   10611 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
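The b5213941.0 symlink name is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by subject hash, so the link is named after the hash printed by the openssl command two steps earlier (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0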
	I1213 08:30:09.374721   10611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:30:09.379453   10611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 08:30:09.379545   10611 kubeadm.go:401] StartCluster: {Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 C
lusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:30:09.379623   10611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:30:09.379684   10611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:30:09.421223   10611 cri.go:89] found id: ""
	I1213 08:30:09.421315   10611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:30:09.449205   10611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:30:09.465425   10611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:30:09.477431   10611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:30:09.477447   10611 kubeadm.go:158] found existing configuration files:
	
	I1213 08:30:09.477510   10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 08:30:09.488492   10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:30:09.488561   10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:30:09.500908   10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 08:30:09.512424   10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:30:09.512502   10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:30:09.525170   10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 08:30:09.536320   10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:30:09.536392   10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:30:09.548489   10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 08:30:09.559795   10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:30:09.559855   10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:30:09.571108   10611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 08:30:09.619627   10611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 08:30:09.619703   10611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:30:09.720533   10611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:30:09.720638   10611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:30:09.720722   10611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:30:09.731665   10611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:30:09.734583   10611 out.go:252]   - Generating certificates and keys ...
	I1213 08:30:09.734681   10611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:30:09.734741   10611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:30:09.809709   10611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 08:30:09.863545   10611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 08:30:10.254146   10611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 08:30:11.018012   10611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 08:30:11.108150   10611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 08:30:11.108328   10611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-917695 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I1213 08:30:11.357906   10611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 08:30:11.358048   10611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-917695 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I1213 08:30:11.655188   10611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 08:30:11.915820   10611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 08:30:12.051535   10611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 08:30:12.051796   10611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:30:12.154743   10611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:30:12.614925   10611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:30:12.720348   10611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:30:13.012433   10611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:30:13.388034   10611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:30:13.388149   10611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:30:13.390434   10611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:30:13.392322   10611 out.go:252]   - Booting up control plane ...
	I1213 08:30:13.392414   10611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:30:13.392493   10611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:30:13.393125   10611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:30:13.412663   10611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:30:13.413014   10611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:30:13.419936   10611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:30:13.420147   10611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:30:13.420233   10611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:30:13.630049   10611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:30:13.630212   10611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 08:30:15.629709   10611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001600303s
	I1213 08:30:15.632617   10611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 08:30:15.633192   10611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.154:8443/livez
	I1213 08:30:15.633373   10611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 08:30:15.633481   10611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 08:30:18.766310   10611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.136117608s
	I1213 08:30:19.739473   10611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.110139068s
	I1213 08:30:21.630241   10611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002152899s
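The endpoints kubeadm polls in the control-plane-check phase above can also be probed by hand from inside the VM, which is useful when this phase hangs (commands match the URLs in the log; -k skips verification of the self-signed serving certs):

    curl -sk https://192.168.39.154:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz        # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez          # kube-scheduler
    curl -s  http://127.0.0.1:10248/healthz         # kubelet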
	I1213 08:30:21.648478   10611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 08:30:21.663596   10611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 08:30:21.680562   10611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 08:30:21.680816   10611 kubeadm.go:319] [mark-control-plane] Marking the node addons-917695 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 08:30:21.697501   10611 kubeadm.go:319] [bootstrap-token] Using token: 0rhxi5.wx2cb5rdzqjx1sa0
	I1213 08:30:21.698983   10611 out.go:252]   - Configuring RBAC rules ...
	I1213 08:30:21.699117   10611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 08:30:21.706345   10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 08:30:21.720074   10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 08:30:21.728448   10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 08:30:21.735225   10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 08:30:21.741391   10611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 08:30:22.040499   10611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 08:30:22.505211   10611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 08:30:23.036185   10611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 08:30:23.039285   10611 kubeadm.go:319] 
	I1213 08:30:23.039399   10611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 08:30:23.039410   10611 kubeadm.go:319] 
	I1213 08:30:23.039538   10611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 08:30:23.039605   10611 kubeadm.go:319] 
	I1213 08:30:23.039648   10611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 08:30:23.039731   10611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 08:30:23.039805   10611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 08:30:23.039815   10611 kubeadm.go:319] 
	I1213 08:30:23.039868   10611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 08:30:23.039872   10611 kubeadm.go:319] 
	I1213 08:30:23.039939   10611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 08:30:23.039954   10611 kubeadm.go:319] 
	I1213 08:30:23.040024   10611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 08:30:23.040142   10611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 08:30:23.040248   10611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 08:30:23.040258   10611 kubeadm.go:319] 
	I1213 08:30:23.040379   10611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 08:30:23.040533   10611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 08:30:23.040545   10611 kubeadm.go:319] 
	I1213 08:30:23.040668   10611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0rhxi5.wx2cb5rdzqjx1sa0 \
	I1213 08:30:23.040809   10611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2609ea5c2ad736c8675b310823db9ecbd6716e426dc88532c1b983e6f0047a99 \
	I1213 08:30:23.040867   10611 kubeadm.go:319] 	--control-plane 
	I1213 08:30:23.040884   10611 kubeadm.go:319] 
	I1213 08:30:23.040992   10611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 08:30:23.041009   10611 kubeadm.go:319] 
	I1213 08:30:23.041117   10611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0rhxi5.wx2cb5rdzqjx1sa0 \
	I1213 08:30:23.041278   10611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2609ea5c2ad736c8675b310823db9ecbd6716e426dc88532c1b983e6f0047a99 
	I1213 08:30:23.042878   10611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
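The only preflight warning is that the kubelet unit is not enabled for boot; minikube starts it explicitly earlier (the systemctl start kubelet at 08:30:08), but the fix the warning itself suggests would simply be:

    sudo systemctl enable kubelet.service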
	I1213 08:30:23.042911   10611 cni.go:84] Creating CNI manager for ""
	I1213 08:30:23.042922   10611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 08:30:23.045036   10611 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 08:30:23.046520   10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 08:30:23.060325   10611 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
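The 496-byte file written here is minikube's bridge CNI config for the 10.244.0.0/16 pod CIDR chosen above. Its exact contents are not shown in the log; a minimal bridge conflist of that general shape (a hedged sketch with assumed field values, not the file minikube ships) would look like:

    cat /etc/cni/net.d/1-k8s.conflist
    # {
    #   "cniVersion": "0.4.0",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }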
	I1213 08:30:23.082365   10611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 08:30:23.082448   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:23.082490   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-917695 minikube.k8s.io/updated_at=2025_12_13T08_30_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=addons-917695 minikube.k8s.io/primary=true
	I1213 08:30:23.231136   10611 ops.go:34] apiserver oom_adj: -16
	I1213 08:30:23.231263   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:23.731931   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:24.231883   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:24.731461   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:25.232152   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:25.732336   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:26.231605   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:26.732244   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:27.232029   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:27.731782   10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:30:27.831459   10611 kubeadm.go:1114] duration metric: took 4.749058671s to wait for elevateKubeSystemPrivileges
	I1213 08:30:27.831503   10611 kubeadm.go:403] duration metric: took 18.451962979s to StartCluster
	I1213 08:30:27.831527   10611 settings.go:142] acquiring lock: {Name:mk0e8a3f7580725c20103c6ec548a6aa0dd069a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:27.831693   10611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:30:27.832392   10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/kubeconfig: {Name:mkf140a0b47414a2ed3efe0851d61f10012610de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:30:27.832632   10611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 08:30:27.832672   10611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 08:30:27.832717   10611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 08:30:27.832824   10611 addons.go:70] Setting inspektor-gadget=true in profile "addons-917695"
	I1213 08:30:27.832846   10611 addons.go:239] Setting addon inspektor-gadget=true in "addons-917695"
	I1213 08:30:27.832845   10611 addons.go:70] Setting yakd=true in profile "addons-917695"
	I1213 08:30:27.832859   10611 addons.go:239] Setting addon yakd=true in "addons-917695"
	I1213 08:30:27.832875   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.832879   10611 addons.go:70] Setting storage-provisioner=true in profile "addons-917695"
	I1213 08:30:27.832888   10611 addons.go:239] Setting addon storage-provisioner=true in "addons-917695"
	I1213 08:30:27.832877   10611 addons.go:70] Setting registry-creds=true in profile "addons-917695"
	I1213 08:30:27.832903   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.832929   10611 addons.go:239] Setting addon registry-creds=true in "addons-917695"
	I1213 08:30:27.832909   10611 addons.go:70] Setting default-storageclass=true in profile "addons-917695"
	I1213 08:30:27.832946   10611 addons.go:70] Setting volcano=true in profile "addons-917695"
	I1213 08:30:27.832962   10611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-917695"
	I1213 08:30:27.832975   10611 addons.go:239] Setting addon volcano=true in "addons-917695"
	I1213 08:30:27.832990   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.832993   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.833023   10611 addons.go:70] Setting volumesnapshots=true in profile "addons-917695"
	I1213 08:30:27.833033   10611 addons.go:239] Setting addon volumesnapshots=true in "addons-917695"
	I1213 08:30:27.833048   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.833601   10611 addons.go:70] Setting cloud-spanner=true in profile "addons-917695"
	I1213 08:30:27.833636   10611 addons.go:239] Setting addon cloud-spanner=true in "addons-917695"
	I1213 08:30:27.833677   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.833857   10611 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-917695"
	I1213 08:30:27.833898   10611 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-917695"
	I1213 08:30:27.833933   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.834364   10611 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-917695"
	I1213 08:30:27.834388   10611 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-917695"
	I1213 08:30:27.834413   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.834460   10611 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-917695"
	I1213 08:30:27.834475   10611 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-917695"
	I1213 08:30:27.832929   10611 config.go:182] Loaded profile config "addons-917695": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:30:27.834582   10611 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-917695"
	I1213 08:30:27.834581   10611 addons.go:70] Setting registry=true in profile "addons-917695"
	I1213 08:30:27.834599   10611 addons.go:239] Setting addon registry=true in "addons-917695"
	I1213 08:30:27.834621   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.834637   10611 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-917695"
	I1213 08:30:27.834665   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.834739   10611 out.go:179] * Verifying Kubernetes components...
	I1213 08:30:27.834842   10611 addons.go:70] Setting metrics-server=true in profile "addons-917695"
	I1213 08:30:27.834859   10611 addons.go:239] Setting addon metrics-server=true in "addons-917695"
	I1213 08:30:27.834882   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.835266   10611 addons.go:70] Setting ingress=true in profile "addons-917695"
	I1213 08:30:27.835332   10611 addons.go:239] Setting addon ingress=true in "addons-917695"
	I1213 08:30:27.835379   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.835515   10611 addons.go:70] Setting ingress-dns=true in profile "addons-917695"
	I1213 08:30:27.835533   10611 addons.go:239] Setting addon ingress-dns=true in "addons-917695"
	I1213 08:30:27.835566   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.832875   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.836121   10611 addons.go:70] Setting gcp-auth=true in profile "addons-917695"
	I1213 08:30:27.836146   10611 mustload.go:66] Loading cluster: addons-917695
	I1213 08:30:27.836245   10611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:30:27.836441   10611 config.go:182] Loaded profile config "addons-917695": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	W1213 08:30:27.840784   10611 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 08:30:27.842176   10611 addons.go:239] Setting addon default-storageclass=true in "addons-917695"
	I1213 08:30:27.842217   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.842670   10611 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-917695"
	I1213 08:30:27.842714   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.842982   10611 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 08:30:27.844065   10611 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 08:30:27.844077   10611 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 08:30:27.844129   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 08:30:27.844181   10611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:30:27.844188   10611 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 08:30:27.844068   10611 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 08:30:27.844065   10611 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 08:30:27.844156   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:27.845108   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 08:30:27.845108   10611 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 08:30:27.845200   10611 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 08:30:27.845209   10611 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 08:30:27.845215   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 08:30:27.845255   10611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 08:30:27.846278   10611 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 08:30:27.846311   10611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:30:27.846316   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 08:30:27.846325   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:30:27.846336   10611 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 08:30:27.846352   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 08:30:27.846382   10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 08:30:27.846399   10611 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 08:30:27.846278   10611 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 08:30:27.846494   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 08:30:27.846281   10611 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1213 08:30:27.845490   10611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:30:27.846639   10611 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 08:30:27.846671   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 08:30:27.846700   10611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:30:27.847012   10611 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 08:30:27.847028   10611 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 08:30:27.848039   10611 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 08:30:27.848066   10611 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 08:30:27.848041   10611 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 08:30:27.848527   10611 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 08:30:27.848110   10611 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 08:30:27.848595   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 08:30:27.848866   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 08:30:27.849928   10611 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 08:30:27.849944   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 08:30:27.850838   10611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:30:27.850838   10611 out.go:179]   - Using image docker.io/busybox:stable
	I1213 08:30:27.852241   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 08:30:27.852329   10611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 08:30:27.852344   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 08:30:27.853494   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.853692   10611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:30:27.855113   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 08:30:27.855315   10611 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 08:30:27.855332   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 08:30:27.855594   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.855641   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.856678   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.857649   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.857902   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.859011   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 08:30:27.860398   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.861207   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.861247   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.861492   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.861877   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.861975   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.862184   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.862280   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.862331   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.862386   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.863319   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.863355   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.863355   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.863693   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.863713   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 08:30:27.863936   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.863965   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.863962   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.864047   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.864447   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.864490   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.864501   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.864529   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.864660   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.864686   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.864684   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.864718   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.864863   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.865125   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.865168   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.865181   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.865549   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.865584   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.865636   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.865669   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.865907   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.865938   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.865945   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.865944   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.865974   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.865990   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.866400   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.866409   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.866789   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 08:30:27.867429   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.867754   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.867796   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.867825   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.867976   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.868284   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.868325   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.868514   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:27.869696   10611 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 08:30:27.871007   10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 08:30:27.871019   10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 08:30:27.873775   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.874134   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:27.874154   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:27.874301   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	W1213 08:30:28.143257   10611 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47486->192.168.39.154:22: read: connection reset by peer
	I1213 08:30:28.143302   10611 retry.go:31] will retry after 292.01934ms: ssh: handshake failed: read tcp 192.168.39.1:47486->192.168.39.154:22: read: connection reset by peer
	W1213 08:30:28.170403   10611 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47510->192.168.39.154:22: read: connection reset by peer
	I1213 08:30:28.170429   10611 retry.go:31] will retry after 182.548903ms: ssh: handshake failed: read tcp 192.168.39.1:47510->192.168.39.154:22: read: connection reset by peer
	I1213 08:30:28.605934   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 08:30:28.706092   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 08:30:28.713812   10611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
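The pipeline above edits the coredns ConfigMap in place: the first sed expression inserts a hosts block ahead of the existing forward-to-/etc/resolv.conf line, and the second inserts log before errors. Reading the Corefile back should show roughly this shape (a sketch of the expected result, not output captured from this run):

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain, after the edit:
	#         log
	#         errors
	#         ...
	#         hosts {
	#            192.168.39.1 host.minikube.internal
	#            fallthrough
	#         }
	#         forward . /etc/resolv.conf
	#         ...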
	I1213 08:30:28.713857   10611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:30:28.780116   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:30:28.811347   10611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 08:30:28.811376   10611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 08:30:28.846273   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 08:30:28.846390   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 08:30:28.861107   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 08:30:28.862968   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 08:30:28.889213   10611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 08:30:28.889247   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 08:30:28.907404   10611 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 08:30:28.907434   10611 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 08:30:28.970415   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 08:30:29.016064   10611 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 08:30:29.016090   10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 08:30:29.028823   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:30:29.566519   10611 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 08:30:29.566547   10611 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 08:30:29.601129   10611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 08:30:29.601161   10611 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 08:30:29.678813   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 08:30:29.691770   10611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 08:30:29.691800   10611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 08:30:29.696107   10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 08:30:29.696137   10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 08:30:29.714392   10611 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 08:30:29.714419   10611 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 08:30:29.866360   10611 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 08:30:29.866380   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 08:30:30.047200   10611 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 08:30:30.047222   10611 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 08:30:30.073353   10611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 08:30:30.073378   10611 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 08:30:30.112783   10611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 08:30:30.112817   10611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 08:30:30.206035   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 08:30:30.234133   10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 08:30:30.234166   10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 08:30:30.361924   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 08:30:30.370167   10611 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 08:30:30.370190   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 08:30:30.386060   10611 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 08:30:30.386092   10611 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 08:30:30.603334   10611 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:30:30.603358   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 08:30:30.612317   10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 08:30:30.612346   10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 08:30:30.810012   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.204040098s)
	I1213 08:30:30.895573   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:30:30.895727   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 08:30:31.143744   10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 08:30:31.143768   10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 08:30:31.575253   10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 08:30:31.575279   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 08:30:32.006873   10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 08:30:32.006912   10611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 08:30:32.462084   10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 08:30:32.462113   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 08:30:32.757623   10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 08:30:32.757644   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 08:30:32.976567   10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 08:30:32.976590   10611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 08:30:33.241117   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 08:30:34.996044   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.289914118s)
	I1213 08:30:34.996109   10611 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.282223771s)
	I1213 08:30:34.996174   10611 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.282316729s)
	I1213 08:30:34.996197   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.216049923s)
	I1213 08:30:34.996200   10611 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 08:30:34.996261   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.149942576s)
	I1213 08:30:34.996307   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.149873382s)
	I1213 08:30:34.996882   10611 node_ready.go:35] waiting up to 6m0s for node "addons-917695" to be "Ready" ...
	I1213 08:30:35.079473   10611 node_ready.go:49] node "addons-917695" is "Ready"
	I1213 08:30:35.079499   10611 node_ready.go:38] duration metric: took 82.598207ms for node "addons-917695" to be "Ready" ...
	I1213 08:30:35.079511   10611 api_server.go:52] waiting for apiserver process to appear ...
	I1213 08:30:35.079561   10611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 08:30:35.103922   10611 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
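The 'default-storageclass' warning above is an optimistic-concurrency conflict: the addon tries to clear the default annotation on the local-path class so that standard can become the default, and the object changes underneath it. The same update expressed by hand uses the standard is-default-class annotation and succeeds on retry; an illustrative equivalent, not the addon's code path:

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'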
	I1213 08:30:35.139952   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.278807837s)
	I1213 08:30:35.140036   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.277030475s)
	I1213 08:30:35.140089   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.169648644s)
	I1213 08:30:35.140165   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.111311445s)
	I1213 08:30:35.302935   10611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 08:30:35.305996   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:35.306574   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:35.306607   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:35.306854   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:35.522419   10611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-917695" context rescaled to 1 replicas
	I1213 08:30:35.868679   10611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 08:30:36.106606   10611 addons.go:239] Setting addon gcp-auth=true in "addons-917695"
	I1213 08:30:36.106665   10611 host.go:66] Checking if "addons-917695" exists ...
	I1213 08:30:36.108415   10611 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 08:30:36.110832   10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:36.111344   10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
	I1213 08:30:36.111368   10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
	I1213 08:30:36.111588   10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
	I1213 08:30:37.057706   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.378852905s)
	I1213 08:30:37.057748   10611 addons.go:495] Verifying addon ingress=true in "addons-917695"
	I1213 08:30:37.057792   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.851713278s)
	I1213 08:30:37.057821   10611 addons.go:495] Verifying addon registry=true in "addons-917695"
	I1213 08:30:37.057897   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.695938221s)
	I1213 08:30:37.057959   10611 addons.go:495] Verifying addon metrics-server=true in "addons-917695"
	I1213 08:30:37.059695   10611 out.go:179] * Verifying ingress addon...
	I1213 08:30:37.059786   10611 out.go:179] * Verifying registry addon...
	I1213 08:30:37.061469   10611 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 08:30:37.061767   10611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 08:30:37.101312   10611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 08:30:37.101340   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:37.126631   10611 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 08:30:37.126653   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:37.240080   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.344457567s)
	W1213 08:30:37.240126   10611 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 08:30:37.240155   10611 retry.go:31] will retry after 293.996062ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
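	The failure above is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the CRD is not yet established when the class is mapped, hence "ensure CRDs are installed first". minikube handles this by retrying (it re-applies with --force shortly after); done manually, the usual pattern is to apply the CRDs, wait for them to become established, then apply the dependent objects, e.g. (illustrative commands, not minikube's retry logic):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml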
	I1213 08:30:37.240168   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.344410602s)
	I1213 08:30:37.242252   10611 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-917695 service yakd-dashboard -n yakd-dashboard
	
	I1213 08:30:37.535086   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:30:37.573128   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:37.574464   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.085030   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.085084   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:38.474579   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.233420261s)
	I1213 08:30:38.474612   10611 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.395031039s)
	I1213 08:30:38.474621   10611 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-917695"
	I1213 08:30:38.474644   10611 api_server.go:72] duration metric: took 10.64193603s to wait for apiserver process to appear ...
	I1213 08:30:38.474653   10611 api_server.go:88] waiting for apiserver healthz status ...
	I1213 08:30:38.474675   10611 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 08:30:38.474716   10611 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.366277621s)
	I1213 08:30:38.476166   10611 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 08:30:38.476235   10611 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 08:30:38.477625   10611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:30:38.478133   10611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 08:30:38.478932   10611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 08:30:38.478951   10611 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 08:30:38.492937   10611 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I1213 08:30:38.496688   10611 api_server.go:141] control plane version: v1.34.2
	I1213 08:30:38.496715   10611 api_server.go:131] duration metric: took 22.053346ms to wait for apiserver health ...
	I1213 08:30:38.496725   10611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 08:30:38.509387   10611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 08:30:38.509414   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:38.522447   10611 system_pods.go:59] 20 kube-system pods found
	I1213 08:30:38.522490   10611 system_pods.go:61] "amd-gpu-device-plugin-fv8qk" [06ada580-f960-46ba-a686-1cf02b573962] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:38.522503   10611 system_pods.go:61] "coredns-66bc5c9577-jvg44" [43d6b098-f87e-4c86-add2-0ce65ebcd7e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:38.522513   10611 system_pods.go:61] "coredns-66bc5c9577-qk82t" [98132a09-ca4a-4070-b715-3def082d8cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:38.522529   10611 system_pods.go:61] "csi-hostpath-attacher-0" [4b1955f9-87f7-4de4-ad2c-e76d9fab8492] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:38.522540   10611 system_pods.go:61] "csi-hostpath-resizer-0" [f666bb51-66c3-4c9e-8d61-f94da690978e] Pending
	I1213 08:30:38.522550   10611 system_pods.go:61] "csi-hostpathplugin-gxqlr" [5248a1ff-c04b-4388-952b-7ba796fd30e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:38.522560   10611 system_pods.go:61] "etcd-addons-917695" [85b5d74e-ba25-4520-83c0-4ce3b36b0a68] Running
	I1213 08:30:38.522567   10611 system_pods.go:61] "kube-apiserver-addons-917695" [e928775e-45e6-48d0-ae6d-fa836392080b] Running
	I1213 08:30:38.522573   10611 system_pods.go:61] "kube-controller-manager-addons-917695" [e3944cf7-0b72-4719-90e0-a1a5a32b41fb] Running
	I1213 08:30:38.522581   10611 system_pods.go:61] "kube-ingress-dns-minikube" [40a1c68c-2c20-480c-9339-6eeb11a0e5d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:38.522587   10611 system_pods.go:61] "kube-proxy-t9crl" [b50a42b7-5b85-4440-b27c-f3a2376cdfac] Running
	I1213 08:30:38.522593   10611 system_pods.go:61] "kube-scheduler-addons-917695" [bedc314b-a5cd-4697-917b-a4ebc62ca5f1] Running
	I1213 08:30:38.522601   10611 system_pods.go:61] "metrics-server-85b7d694d7-txm49" [b0c671da-5ff1-4882-b011-4feddd170742] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:38.522612   10611 system_pods.go:61] "nvidia-device-plugin-daemonset-fc667" [3cb5ce62-9820-4ff4-a96c-d1dd68c20667] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:38.522628   10611 system_pods.go:61] "registry-6b586f9694-jk6nh" [5b9cee4c-b367-49f4-bc49-497edd267414] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:38.522637   10611 system_pods.go:61] "registry-creds-764b6fb674-rcrdr" [b6c2f09d-b53b-43a4-99f0-a69adbf0ff6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:38.522644   10611 system_pods.go:61] "registry-proxy-6svfh" [64d7a435-6506-4bba-a294-e2111eee1c24] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:38.522653   10611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-877d8" [c6688aaa-a34c-4ad0-8f0d-2d2100bd7a6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:38.522661   10611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pxvwz" [81346d9c-2e81-4f84-9f88-574efa1f58c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:38.522674   10611 system_pods.go:61] "storage-provisioner" [f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:38.522687   10611 system_pods.go:74] duration metric: took 25.952997ms to wait for pod list to return data ...
	I1213 08:30:38.522699   10611 default_sa.go:34] waiting for default service account to be created ...
	I1213 08:30:38.573405   10611 default_sa.go:45] found service account: "default"
	I1213 08:30:38.573431   10611 default_sa.go:55] duration metric: took 50.72468ms for default service account to be created ...
	I1213 08:30:38.573442   10611 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 08:30:38.578981   10611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 08:30:38.579003   10611 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 08:30:38.627379   10611 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:38.627408   10611 system_pods.go:89] "amd-gpu-device-plugin-fv8qk" [06ada580-f960-46ba-a686-1cf02b573962] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:38.627414   10611 system_pods.go:89] "coredns-66bc5c9577-jvg44" [43d6b098-f87e-4c86-add2-0ce65ebcd7e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:38.627424   10611 system_pods.go:89] "coredns-66bc5c9577-qk82t" [98132a09-ca4a-4070-b715-3def082d8cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:38.627433   10611 system_pods.go:89] "csi-hostpath-attacher-0" [4b1955f9-87f7-4de4-ad2c-e76d9fab8492] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:38.627441   10611 system_pods.go:89] "csi-hostpath-resizer-0" [f666bb51-66c3-4c9e-8d61-f94da690978e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:38.627453   10611 system_pods.go:89] "csi-hostpathplugin-gxqlr" [5248a1ff-c04b-4388-952b-7ba796fd30e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:38.627459   10611 system_pods.go:89] "etcd-addons-917695" [85b5d74e-ba25-4520-83c0-4ce3b36b0a68] Running
	I1213 08:30:38.627465   10611 system_pods.go:89] "kube-apiserver-addons-917695" [e928775e-45e6-48d0-ae6d-fa836392080b] Running
	I1213 08:30:38.627472   10611 system_pods.go:89] "kube-controller-manager-addons-917695" [e3944cf7-0b72-4719-90e0-a1a5a32b41fb] Running
	I1213 08:30:38.627480   10611 system_pods.go:89] "kube-ingress-dns-minikube" [40a1c68c-2c20-480c-9339-6eeb11a0e5d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:38.627488   10611 system_pods.go:89] "kube-proxy-t9crl" [b50a42b7-5b85-4440-b27c-f3a2376cdfac] Running
	I1213 08:30:38.627492   10611 system_pods.go:89] "kube-scheduler-addons-917695" [bedc314b-a5cd-4697-917b-a4ebc62ca5f1] Running
	I1213 08:30:38.627497   10611 system_pods.go:89] "metrics-server-85b7d694d7-txm49" [b0c671da-5ff1-4882-b011-4feddd170742] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:38.627503   10611 system_pods.go:89] "nvidia-device-plugin-daemonset-fc667" [3cb5ce62-9820-4ff4-a96c-d1dd68c20667] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:38.627517   10611 system_pods.go:89] "registry-6b586f9694-jk6nh" [5b9cee4c-b367-49f4-bc49-497edd267414] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:38.627523   10611 system_pods.go:89] "registry-creds-764b6fb674-rcrdr" [b6c2f09d-b53b-43a4-99f0-a69adbf0ff6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:38.627532   10611 system_pods.go:89] "registry-proxy-6svfh" [64d7a435-6506-4bba-a294-e2111eee1c24] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:38.627540   10611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-877d8" [c6688aaa-a34c-4ad0-8f0d-2d2100bd7a6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:38.627567   10611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pxvwz" [81346d9c-2e81-4f84-9f88-574efa1f58c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:38.627573   10611 system_pods.go:89] "storage-provisioner" [f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16] Running
	I1213 08:30:38.627592   10611 system_pods.go:126] duration metric: took 54.141518ms to wait for k8s-apps to be running ...
	I1213 08:30:38.627603   10611 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 08:30:38.627647   10611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:30:38.628744   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:38.629867   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.683991   10611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 08:30:38.684013   10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 08:30:38.735669   10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 08:30:38.985020   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:39.066466   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:39.068482   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.487093   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:39.527241   10611 system_svc.go:56] duration metric: took 899.630364ms WaitForService to wait for kubelet
	I1213 08:30:39.527307   10611 kubeadm.go:587] duration metric: took 11.694578855s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:30:39.527335   10611 node_conditions.go:102] verifying NodePressure condition ...
	I1213 08:30:39.527240   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.992103835s)
	I1213 08:30:39.537667   10611 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 08:30:39.537708   10611 node_conditions.go:123] node cpu capacity is 2
	I1213 08:30:39.537738   10611 node_conditions.go:105] duration metric: took 10.394784ms to run NodePressure ...
	I1213 08:30:39.537756   10611 start.go:242] waiting for startup goroutines ...
	I1213 08:30:39.587064   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:39.587066   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.995172   10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.259463016s)
	I1213 08:30:39.996265   10611 addons.go:495] Verifying addon gcp-auth=true in "addons-917695"
	I1213 08:30:39.998188   10611 out.go:179] * Verifying gcp-auth addon...
	I1213 08:30:39.999985   10611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 08:30:40.027600   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:40.035507   10611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 08:30:40.035533   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:40.090127   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:40.090136   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.486206   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:40.506195   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:40.588558   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.588561   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:40.987917   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:41.004033   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:41.065804   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:41.072087   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.482188   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:41.503673   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:41.583798   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:41.583845   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.982171   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:42.004790   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:42.064800   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:42.065348   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:42.481975   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:42.507307   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:42.566009   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:42.566786   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:42.983612   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:43.004320   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:43.066720   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:43.066842   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:43.485235   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:43.504059   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:43.566350   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:43.566646   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:43.983760   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:44.003478   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:44.066839   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:44.069548   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:44.483653   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:44.504698   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:44.575195   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:44.576671   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:44.985141   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:45.004831   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:45.066718   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:45.068134   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:45.481875   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:45.505488   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:45.569783   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:45.569957   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:45.983598   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:46.004267   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:46.067338   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:46.067998   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:46.483364   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:46.504929   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:46.566067   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:46.566133   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:46.982632   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:47.003995   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:47.065575   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.066331   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:47.482028   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:47.504276   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:47.565995   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:47.570236   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.983080   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:48.004173   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:48.065674   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.066225   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:48.481715   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:48.504117   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:48.566360   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:48.566494   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.983054   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:49.004063   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:49.066552   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:49.066661   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:49.482261   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:49.506234   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:49.566094   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:49.567065   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:49.982500   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:50.004762   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:50.067543   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:50.070725   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:50.485246   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:50.506370   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:50.567193   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:50.570574   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:50.984933   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:51.005284   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:51.068498   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:51.068552   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:51.483892   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:51.504813   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:51.565719   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:51.565904   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:51.983599   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:52.003442   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:52.066534   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:52.069453   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:52.483144   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:52.507752   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:52.566324   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:52.567101   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:52.982564   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:53.004002   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:53.065321   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:53.065459   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:53.482463   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:53.503596   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:53.564485   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:53.564804   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:53.982349   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:54.003003   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:54.065641   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:54.065867   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:54.482160   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:54.502913   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:54.565404   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:54.566126   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:54.981984   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:55.003993   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:55.065909   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:55.066196   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:55.481440   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:55.503968   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:55.568099   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:55.569080   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:55.982573   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:56.086999   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:56.087022   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:56.087320   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:56.482389   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:56.504224   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:56.566814   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:56.567989   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:56.981165   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:57.003917   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:57.068755   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:57.068924   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:57.482602   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:57.504712   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:57.569162   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:57.569393   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:57.982981   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:58.005162   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:58.066673   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:58.067901   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:58.484819   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:58.503404   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:58.573104   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:58.576347   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:58.985743   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:59.006675   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:59.067032   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:59.070093   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:59.482522   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:59.505481   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:59.564929   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:59.566071   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:00.027593   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:00.027683   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:00.065747   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:00.066341   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:00.485895   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:00.505211   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:00.574016   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:00.578810   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:00.987859   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:01.007577   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:01.067885   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:01.069095   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:01.485980   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:01.506041   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:01.567201   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:01.568749   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:01.985893   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:02.005672   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:02.086304   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:02.086512   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:02.482027   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:02.504371   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:02.568384   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:02.570259   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:02.983362   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:03.003559   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:03.066128   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:03.067808   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:03.482670   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:03.504831   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:03.566182   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:03.567321   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:03.982117   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:04.004267   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:04.065483   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:04.065641   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:04.482255   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:04.503456   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:04.568521   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:04.568695   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:04.985466   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:05.006320   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:05.068385   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:05.068844   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:05.483097   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:05.503787   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:05.565025   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:05.565718   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:05.984902   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:06.004451   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:06.067876   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:06.068887   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:06.483185   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:06.502858   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:06.564992   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:06.565041   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:06.981882   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:07.004161   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:07.065144   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:07.065861   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:07.482421   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:07.503173   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:07.565878   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:07.565989   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:07.982089   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:08.004129   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:08.065843   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:08.066433   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:08.482960   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:08.505187   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:08.566262   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:08.567815   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:08.984431   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:09.004879   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:09.068864   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:09.069190   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:09.481840   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:09.505657   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:09.568467   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:09.571000   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:09.982185   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:10.004144   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:10.065616   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:10.066736   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:10.486997   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:10.504172   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:10.586870   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:10.587106   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:10.982510   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:11.003803   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:11.066836   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:31:11.067019   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:11.482118   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:11.503087   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:11.564901   10611 kapi.go:107] duration metric: took 34.503134342s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 08:31:11.565276   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:11.982306   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:12.003254   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:12.083728   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:12.483016   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:12.504548   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:12.565470   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:12.984330   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:13.004830   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:13.067456   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:13.482230   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:13.503583   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:13.566449   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:13.983020   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:14.004213   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:14.066226   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:14.485933   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:14.504012   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:14.567765   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:14.986414   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:15.006901   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:15.069960   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:15.481506   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:15.503325   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:15.565504   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:15.983162   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:16.004410   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:16.065974   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:16.481658   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:16.582490   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:16.582554   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:16.984186   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:17.004901   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:17.067389   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:17.482049   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:17.506112   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:17.566534   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:17.985168   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:18.012784   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:18.066929   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:18.483903   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:18.503902   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:18.570104   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:18.982724   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:19.003956   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:19.066246   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:19.482105   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:19.514032   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:19.568435   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:19.982969   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:20.004472   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:20.066897   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:20.484904   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:20.508261   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:20.616095   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:20.983561   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:21.003590   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:21.065723   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:21.676851   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:21.677183   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:21.678514   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:21.983399   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:22.003767   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:22.065524   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:22.483554   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:22.504872   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:22.572686   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:22.985539   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:23.003546   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:23.065819   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:23.482887   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:23.503886   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:23.588566   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:23.985080   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:24.006108   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:24.067174   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:24.485097   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:24.503706   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:24.569361   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:24.986780   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:25.005115   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:25.086009   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:25.483731   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:25.504727   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:25.567565   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:26.002117   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:26.006914   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:26.066871   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:26.483107   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:26.506027   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:26.565692   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:26.982913   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:27.004024   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:27.067242   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:27.485749   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:27.503569   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:27.570712   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:27.984047   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:28.004863   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:28.065251   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:28.482935   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:28.503394   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:28.582973   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:28.982508   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:29.007042   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:29.065664   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:29.484670   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:29.505670   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:29.564830   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:29.982818   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:30.003837   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:30.065846   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:30.488532   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:30.504498   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:30.568722   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:30.982928   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:31.003705   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:31.065081   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:31.481834   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:31.505587   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:31.567532   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:31.984769   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:32.004156   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:32.069464   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:32.487859   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:32.508318   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:32.568384   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:32.989950   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:33.007150   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:33.066004   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:33.482278   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:33.504674   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:33.565557   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:33.983527   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:34.005459   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:34.070351   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:34.483494   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:34.506654   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:34.565852   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:34.983973   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:35.004047   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:35.068743   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:35.484138   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:35.503800   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:35.566157   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:35.987763   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:36.005793   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:36.094913   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:36.484634   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:36.507267   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:36.566351   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:36.983988   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:37.005299   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:37.069139   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:37.482671   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:37.503705   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:37.565323   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:37.984077   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:38.003950   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:38.067904   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:38.487006   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:38.505982   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:38.567888   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:39.009660   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:39.009837   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:39.068484   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:39.482398   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:39.502917   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:39.582813   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:39.987192   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:40.012946   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:40.187826   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:40.483684   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:40.503677   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:40.564892   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:40.985214   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:41.004344   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:41.068490   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:41.482798   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:41.508898   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:41.569680   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:41.984648   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:42.003345   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:42.071211   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:42.484375   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:42.503285   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:42.567312   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:42.985228   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:43.004314   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:43.066178   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:43.481328   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:43.506803   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:43.568450   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:43.983227   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:44.004136   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:44.067780   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:44.485793   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:44.584848   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:44.585007   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:44.990775   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:45.009358   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:45.091871   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:45.484910   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:45.505062   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:45.568693   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:45.982807   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:46.004366   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:46.067811   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:46.483959   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:46.504651   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:46.565454   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:46.983387   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:47.007241   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:47.066960   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:47.481462   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:47.503386   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:47.567171   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:47.982363   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:48.003382   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:48.065591   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:48.483695   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:48.504325   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:48.568833   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:48.982407   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:49.007093   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:49.083771   10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:49.486163   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:49.586071   10611 kapi.go:107] duration metric: took 1m12.524599352s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 08:31:49.587169   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:49.983009   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:50.003792   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:50.483676   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:50.504540   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:50.986538   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:51.087881   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:51.482411   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:31:51.502953   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:51.982182   10611 kapi.go:107] duration metric: took 1m13.504044327s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 08:31:52.002717   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:52.503253   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:53.004951   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:53.505590   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:54.007819   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:54.505381   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:55.006248   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:55.504378   10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:31:56.004727   10611 kapi.go:107] duration metric: took 1m16.004739814s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 08:31:56.006695   10611 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-917695 cluster.
	I1213 08:31:56.008309   10611 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 08:31:56.009790   10611 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 08:31:56.011544   10611 out.go:179] * Enabled addons: registry-creds, cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1213 08:31:56.013091   10611 addons.go:530] duration metric: took 1m28.180365124s for enable addons: enabled=[registry-creds cloud-spanner amd-gpu-device-plugin inspektor-gadget ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1213 08:31:56.013143   10611 start.go:247] waiting for cluster config update ...
	I1213 08:31:56.013177   10611 start.go:256] writing updated cluster config ...
	I1213 08:31:56.013467   10611 ssh_runner.go:195] Run: rm -f paused
	I1213 08:31:56.021079   10611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 08:31:56.024741   10611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qk82t" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.031050   10611 pod_ready.go:94] pod "coredns-66bc5c9577-qk82t" is "Ready"
	I1213 08:31:56.031078   10611 pod_ready.go:86] duration metric: took 6.311424ms for pod "coredns-66bc5c9577-qk82t" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.034031   10611 pod_ready.go:83] waiting for pod "etcd-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.040587   10611 pod_ready.go:94] pod "etcd-addons-917695" is "Ready"
	I1213 08:31:56.040611   10611 pod_ready.go:86] duration metric: took 6.557647ms for pod "etcd-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.043032   10611 pod_ready.go:83] waiting for pod "kube-apiserver-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.047769   10611 pod_ready.go:94] pod "kube-apiserver-addons-917695" is "Ready"
	I1213 08:31:56.047792   10611 pod_ready.go:86] duration metric: took 4.739875ms for pod "kube-apiserver-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.050486   10611 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.425867   10611 pod_ready.go:94] pod "kube-controller-manager-addons-917695" is "Ready"
	I1213 08:31:56.425899   10611 pod_ready.go:86] duration metric: took 375.37084ms for pod "kube-controller-manager-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:56.625569   10611 pod_ready.go:83] waiting for pod "kube-proxy-t9crl" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:57.025548   10611 pod_ready.go:94] pod "kube-proxy-t9crl" is "Ready"
	I1213 08:31:57.025574   10611 pod_ready.go:86] duration metric: took 399.982799ms for pod "kube-proxy-t9crl" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:57.225609   10611 pod_ready.go:83] waiting for pod "kube-scheduler-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:57.628536   10611 pod_ready.go:94] pod "kube-scheduler-addons-917695" is "Ready"
	I1213 08:31:57.628564   10611 pod_ready.go:86] duration metric: took 402.924944ms for pod "kube-scheduler-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:57.628575   10611 pod_ready.go:40] duration metric: took 1.607467659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 08:31:57.672484   10611 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 08:31:57.674528   10611 out.go:179] * Done! kubectl is now configured to use "addons-917695" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.047553995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906047524137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21c3845d-3cfe-4cb7-9cd0-6a64f4274f7e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.048686936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=435f0f49-a32c-4a1d-a356-c8c1b1dc3a54 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.048859936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=435f0f49-a32c-4a1d-a356-c8c1b1dc3a54 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.049203162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=435f0f49-a32c-4a1d-a356-c8c1b1dc3a54 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.085294295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ce1c569-c57f-474b-a0c7-40503b1be067 name=/runtime.v1.RuntimeService/Version
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.085428445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ce1c569-c57f-474b-a0c7-40503b1be067 name=/runtime.v1.RuntimeService/Version
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.087132350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1991686-5c39-45c4-ab51-f53cd13cf547 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.088416557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906088384974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1991686-5c39-45c4-ab51-f53cd13cf547 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.089723297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cd3c1d8-3f56-4c5d-8a71-2bd017dfa094 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.089840229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cd3c1d8-3f56-4c5d-8a71-2bd017dfa094 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.090736335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cd3c1d8-3f56-4c5d-8a71-2bd017dfa094 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.124069366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50975def-74d3-4846-b608-07fea916e659 name=/runtime.v1.RuntimeService/Version
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.124181480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50975def-74d3-4846-b608-07fea916e659 name=/runtime.v1.RuntimeService/Version
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.126118275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59e28ff8-1e85-43f5-8e61-46f96be2ddd0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.128163708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906128133150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59e28ff8-1e85-43f5-8e61-46f96be2ddd0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.129190963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e28964b9-c44e-4145-bfa2-e7b8bf00dd94 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.129248146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e28964b9-c44e-4145-bfa2-e7b8bf00dd94 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.129674802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e28964b9-c44e-4145-bfa2-e7b8bf00dd94 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.161039888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fba7d92-75dc-432b-b001-cda006972dda name=/runtime.v1.RuntimeService/Version
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.161361433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fba7d92-75dc-432b-b001-cda006972dda name=/runtime.v1.RuntimeService/Version
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.163209917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=050d05fd-961f-41ba-85dc-c2b3fa6d4627 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.164401387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906164372834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=050d05fd-961f-41ba-85dc-c2b3fa6d4627 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.166106344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72a2967d-c952-420c-84be-c1ca25690f16 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.166280023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72a2967d-c952-420c-84be-c1ca25690f16 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.166673187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72a2967d-c952-420c-84be-c1ca25690f16 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1f6d4159bb313       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                                             2 minutes ago       Running             nginx                     0                   68a2d4cc7bf7c       nginx                                       default
	9a463f2fb9267       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   366bee97de5b2       busybox                                     default
	a7459c2ce3b80       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   d54d5ead7c19f       ingress-nginx-controller-85d4c799dd-bzgr8   ingress-nginx
	76a36cc5fd27a       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago       Exited              patch                     1                   61a8f89598c8e       ingress-nginx-admission-patch-mbzzw         ingress-nginx
	d2574200a6355       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   b6f69c472fea5       ingress-nginx-admission-create-jwjv9        ingress-nginx
	02f24c26b90be       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   f6e4b3a7f3b2c       kube-ingress-dns-minikube                   kube-system
	d6018cebb6981       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   ded4eed246092       amd-gpu-device-plugin-fv8qk                 kube-system
	5b9fa976e19fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   4de8c11681f0a       storage-provisioner                         kube-system
	c394f439277ef       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   165bfc56736b4       coredns-66bc5c9577-qk82t                    kube-system
	bb50ab34a1664       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   42096456024fa       kube-proxy-t9crl                            kube-system
	3cc9e2f1a4cb6       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   5666287ef23de       kube-controller-manager-addons-917695       kube-system
	c5dbe8cce3d6c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   3a1ef997a680c       etcd-addons-917695                          kube-system
	26e3580417c1e       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   7d2cf9d2fda04       kube-apiserver-addons-917695                kube-system
	d6c211ba7e8e2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   b1eddffe49be6       kube-scheduler-addons-917695                kube-system
	
	
	==> coredns [c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00] <==
	[INFO] 10.244.0.8:32806 - 19357 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000394978s
	[INFO] 10.244.0.8:32806 - 31307 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000158517s
	[INFO] 10.244.0.8:32806 - 33851 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00008586s
	[INFO] 10.244.0.8:32806 - 6566 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00010822s
	[INFO] 10.244.0.8:32806 - 42666 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000078394s
	[INFO] 10.244.0.8:32806 - 41581 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000164022s
	[INFO] 10.244.0.8:32806 - 61292 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000113154s
	[INFO] 10.244.0.8:51130 - 48536 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156432s
	[INFO] 10.244.0.8:51130 - 48864 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000324038s
	[INFO] 10.244.0.8:52414 - 63175 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114249s
	[INFO] 10.244.0.8:52414 - 62953 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126653s
	[INFO] 10.244.0.8:47288 - 39992 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094012s
	[INFO] 10.244.0.8:47288 - 40228 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113724s
	[INFO] 10.244.0.8:47087 - 7658 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088475s
	[INFO] 10.244.0.8:47087 - 7215 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201639s
	[INFO] 10.244.0.23:46684 - 34471 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000392123s
	[INFO] 10.244.0.23:35847 - 16168 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000500956s
	[INFO] 10.244.0.23:55738 - 24299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010832s
	[INFO] 10.244.0.23:60736 - 12659 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102728s
	[INFO] 10.244.0.23:57686 - 17346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000185445s
	[INFO] 10.244.0.23:45048 - 50868 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081868s
	[INFO] 10.244.0.23:48212 - 3347 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001195853s
	[INFO] 10.244.0.23:52309 - 55971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.001413875s
	[INFO] 10.244.0.28:47364 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000419606s
	[INFO] 10.244.0.28:59444 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165626s
	
	
	==> describe nodes <==
	Name:               addons-917695
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-917695
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=addons-917695
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T08_30_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-917695
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 08:30:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-917695
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 08:34:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 08:32:55 +0000   Sat, 13 Dec 2025 08:30:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 08:32:55 +0000   Sat, 13 Dec 2025 08:30:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 08:32:55 +0000   Sat, 13 Dec 2025 08:30:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 08:32:55 +0000   Sat, 13 Dec 2025 08:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    addons-917695
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 412eefcb63ce429c917fa5530725ef67
	  System UUID:                412eefcb-63ce-429c-917f-a5530725ef67
	  Boot ID:                    c5eef4a8-274f-4b8e-afb8-04f83410bea1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-p9lr2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-bzgr8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m30s
	  kube-system                 amd-gpu-device-plugin-fv8qk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-66bc5c9577-qk82t                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m39s
	  kube-system                 etcd-addons-917695                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-addons-917695                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-addons-917695        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-t9crl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-addons-917695                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m36s                  kube-proxy       
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node addons-917695 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node addons-917695 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s (x7 over 4m51s)  kubelet          Node addons-917695 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m44s                  kubelet          Node addons-917695 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s                  kubelet          Node addons-917695 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s                  kubelet          Node addons-917695 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m43s                  kubelet          Node addons-917695 status is now: NodeReady
	  Normal  RegisteredNode           4m40s                  node-controller  Node addons-917695 event: Registered Node addons-917695 in Controller
	
	
	==> dmesg <==
	[  +0.000019] kauditd_printk_skb: 312 callbacks suppressed
	[  +0.426626] kauditd_printk_skb: 323 callbacks suppressed
	[  +5.901503] kauditd_printk_skb: 374 callbacks suppressed
	[  +6.331895] kauditd_printk_skb: 5 callbacks suppressed
	[Dec13 08:31] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.857601] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.694150] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.648703] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.842879] kauditd_printk_skb: 121 callbacks suppressed
	[  +7.350415] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000096] kauditd_printk_skb: 201 callbacks suppressed
	[  +2.215664] kauditd_printk_skb: 65 callbacks suppressed
	[  +8.366644] kauditd_printk_skb: 47 callbacks suppressed
	[Dec13 08:32] kauditd_printk_skb: 47 callbacks suppressed
	[ +11.140707] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 22 callbacks suppressed
	[  +1.394635] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.857730] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.000032] kauditd_printk_skb: 103 callbacks suppressed
	[  +3.788578] kauditd_printk_skb: 141 callbacks suppressed
	[  +4.055829] kauditd_printk_skb: 94 callbacks suppressed
	[Dec13 08:33] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.462567] kauditd_printk_skb: 91 callbacks suppressed
	[  +1.628936] kauditd_printk_skb: 44 callbacks suppressed
	[Dec13 08:35] kauditd_printk_skb: 107 callbacks suppressed
	
	
	==> etcd [c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49] <==
	{"level":"info","ts":"2025-12-13T08:31:40.192618Z","caller":"traceutil/trace.go:172","msg":"trace[505564146] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"129.97278ms","start":"2025-12-13T08:31:40.062404Z","end":"2025-12-13T08:31:40.192377Z","steps":["trace[505564146] 'process raft request'  (duration: 129.121809ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T08:31:50.909660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.987607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-12-13T08:31:50.909728Z","caller":"traceutil/trace.go:172","msg":"trace[614981978] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1190; }","duration":"159.061424ms","start":"2025-12-13T08:31:50.750656Z","end":"2025-12-13T08:31:50.909717Z","steps":["trace[614981978] 'range keys from in-memory index tree'  (duration: 158.893967ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T08:31:50.911937Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.469468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T08:31:50.912001Z","caller":"traceutil/trace.go:172","msg":"trace[905319841] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1190; }","duration":"135.760797ms","start":"2025-12-13T08:31:50.776229Z","end":"2025-12-13T08:31:50.911990Z","steps":["trace[905319841] 'range keys from in-memory index tree'  (duration: 133.168659ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:31:54.675777Z","caller":"traceutil/trace.go:172","msg":"trace[1830533199] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"145.254738ms","start":"2025-12-13T08:31:54.530509Z","end":"2025-12-13T08:31:54.675764Z","steps":["trace[1830533199] 'process raft request'  (duration: 145.152074ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:32:00.516681Z","caller":"traceutil/trace.go:172","msg":"trace[1750467939] transaction","detail":"{read_only:false; response_revision:1234; number_of_response:1; }","duration":"125.984338ms","start":"2025-12-13T08:32:00.390684Z","end":"2025-12-13T08:32:00.516668Z","steps":["trace[1750467939] 'process raft request'  (duration: 125.869471ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:32:24.899858Z","caller":"traceutil/trace.go:172","msg":"trace[1998752364] linearizableReadLoop","detail":"{readStateIndex:1433; appliedIndex:1433; }","duration":"114.274356ms","start":"2025-12-13T08:32:24.785555Z","end":"2025-12-13T08:32:24.899830Z","steps":["trace[1998752364] 'read index received'  (duration: 114.269921ms)","trace[1998752364] 'applied index is now lower than readState.Index'  (duration: 3.776µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T08:32:24.900777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.143526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d\" limit:1 ","response":"range_response_count:1 size:2898"}
	{"level":"info","ts":"2025-12-13T08:32:24.900836Z","caller":"traceutil/trace.go:172","msg":"trace[738720322] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d; range_end:; response_count:1; response_revision:1392; }","duration":"115.275408ms","start":"2025-12-13T08:32:24.785552Z","end":"2025-12-13T08:32:24.900828Z","steps":["trace[738720322] 'agreement among raft nodes before linearized reading'  (duration: 114.478285ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:32:24.900932Z","caller":"traceutil/trace.go:172","msg":"trace[1335731661] transaction","detail":"{read_only:false; response_revision:1393; number_of_response:1; }","duration":"117.808554ms","start":"2025-12-13T08:32:24.783110Z","end":"2025-12-13T08:32:24.900919Z","steps":["trace[1335731661] 'process raft request'  (duration: 116.705848ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T08:32:24.901103Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.509378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T08:32:24.901155Z","caller":"traceutil/trace.go:172","msg":"trace[176730814] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1393; }","duration":"115.563323ms","start":"2025-12-13T08:32:24.785586Z","end":"2025-12-13T08:32:24.901149Z","steps":["trace[176730814] 'agreement among raft nodes before linearized reading'  (duration: 115.489872ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:32:24.901308Z","caller":"traceutil/trace.go:172","msg":"trace[1924384903] transaction","detail":"{read_only:false; response_revision:1394; number_of_response:1; }","duration":"108.399754ms","start":"2025-12-13T08:32:24.792903Z","end":"2025-12-13T08:32:24.901303Z","steps":["trace[1924384903] 'process raft request'  (duration: 108.353136ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:32:28.361580Z","caller":"traceutil/trace.go:172","msg":"trace[352310809] linearizableReadLoop","detail":"{readStateIndex:1453; appliedIndex:1453; }","duration":"225.244207ms","start":"2025-12-13T08:32:28.136316Z","end":"2025-12-13T08:32:28.361560Z","steps":["trace[352310809] 'read index received'  (duration: 225.239099ms)","trace[352310809] 'applied index is now lower than readState.Index'  (duration: 4.238µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T08:32:28.361780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"225.465855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2025-12-13T08:32:28.361801Z","caller":"traceutil/trace.go:172","msg":"trace[975671476] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1411; }","duration":"225.50974ms","start":"2025-12-13T08:32:28.136285Z","end":"2025-12-13T08:32:28.361795Z","steps":["trace[975671476] 'agreement among raft nodes before linearized reading'  (duration: 225.375061ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T08:32:28.362126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.065923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T08:32:28.362148Z","caller":"traceutil/trace.go:172","msg":"trace[1761542371] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1412; }","duration":"220.092226ms","start":"2025-12-13T08:32:28.142050Z","end":"2025-12-13T08:32:28.362143Z","steps":["trace[1761542371] 'agreement among raft nodes before linearized reading'  (duration: 220.053769ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:32:28.362343Z","caller":"traceutil/trace.go:172","msg":"trace[1658786718] transaction","detail":"{read_only:false; response_revision:1412; number_of_response:1; }","duration":"242.257919ms","start":"2025-12-13T08:32:28.120077Z","end":"2025-12-13T08:32:28.362335Z","steps":["trace[1658786718] 'process raft request'  (duration: 241.916727ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T08:32:28.362626Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.644999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T08:32:28.362709Z","caller":"traceutil/trace.go:172","msg":"trace[837552846] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1412; }","duration":"168.732597ms","start":"2025-12-13T08:32:28.193970Z","end":"2025-12-13T08:32:28.362703Z","steps":["trace[837552846] 'agreement among raft nodes before linearized reading'  (duration: 168.62688ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T08:32:28.362924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.664707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T08:32:28.362967Z","caller":"traceutil/trace.go:172","msg":"trace[629439109] range","detail":"{range_begin:/registry/networkpolicies; range_end:; response_count:0; response_revision:1412; }","duration":"136.710728ms","start":"2025-12-13T08:32:28.226251Z","end":"2025-12-13T08:32:28.362962Z","steps":["trace[629439109] 'agreement among raft nodes before linearized reading'  (duration: 136.652088ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T08:32:39.043108Z","caller":"traceutil/trace.go:172","msg":"trace[1544671139] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"230.997862ms","start":"2025-12-13T08:32:38.812096Z","end":"2025-12-13T08:32:39.043094Z","steps":["trace[1544671139] 'process raft request'  (duration: 230.885189ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:35:06 up 5 min,  0 users,  load average: 0.66, 1.27, 0.66
	Linux addons-917695 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 08:31:25.687073       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 08:32:09.469080       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:36178: use of closed network connection
	E1213 08:32:09.671962       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:56638: use of closed network connection
	I1213 08:32:19.112051       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.236.225"}
	I1213 08:32:43.769687       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 08:32:43.948575       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.229.196"}
	I1213 08:32:47.306127       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1213 08:32:49.865606       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1213 08:33:10.010315       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 08:33:10.010603       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 08:33:10.030004       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 08:33:10.030120       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 08:33:10.045825       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 08:33:10.045894       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 08:33:10.078488       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 08:33:10.078644       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 08:33:10.096774       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 08:33:10.097140       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 08:33:11.032004       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 08:33:11.097372       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1213 08:33:11.114025       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1213 08:33:26.637879       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1213 08:35:05.025623       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.6.177"}
	
	
	==> kube-controller-manager [3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a] <==
	E1213 08:33:21.060245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:33:25.860955       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:33:25.861985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1213 08:33:26.827320       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 08:33:26.827482       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 08:33:26.874584       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 08:33:26.874638       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 08:33:27.545198       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:33:27.546636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:33:32.520161       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:33:32.521343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:33:44.709709       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:33:44.710726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:33:46.901692       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:33:46.902899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:33:50.096569       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:33:50.097679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:34:22.637317       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:34:22.638399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:34:24.429646       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:34:24.431175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:34:27.099275       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:34:27.100737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 08:34:55.083053       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 08:34:55.084038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d] <==
	I1213 08:30:29.534238       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 08:30:29.639757       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 08:30:29.639805       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.154"]
	E1213 08:30:29.639937       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 08:30:29.822742       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 08:30:29.823589       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 08:30:29.823647       1 server_linux.go:132] "Using iptables Proxier"
	I1213 08:30:29.846107       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 08:30:29.846406       1 server.go:527] "Version info" version="v1.34.2"
	I1213 08:30:29.846417       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 08:30:29.860851       1 config.go:200] "Starting service config controller"
	I1213 08:30:29.860879       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 08:30:29.860899       1 config.go:106] "Starting endpoint slice config controller"
	I1213 08:30:29.860902       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 08:30:29.860913       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 08:30:29.860916       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 08:30:29.870887       1 config.go:309] "Starting node config controller"
	I1213 08:30:29.870915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 08:30:29.962309       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 08:30:29.962377       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 08:30:29.962423       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 08:30:29.971059       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4] <==
	E1213 08:30:19.728326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 08:30:19.728393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 08:30:19.729187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 08:30:19.729292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 08:30:19.729649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 08:30:19.730072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 08:30:19.730114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 08:30:19.730149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 08:30:19.730241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 08:30:19.730257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 08:30:19.730512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 08:30:19.730527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 08:30:20.669591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 08:30:20.699594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 08:30:20.716527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 08:30:20.734836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 08:30:20.809970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 08:30:20.901428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 08:30:20.919187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 08:30:20.926052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 08:30:20.969424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 08:30:21.060189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 08:30:21.132419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 08:30:21.170997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1213 08:30:24.020280       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 08:33:24 addons-917695 kubelet[1506]: I1213 08:33:24.060787    1506 scope.go:117] "RemoveContainer" containerID="8b101398a88e9b37bc69d87389b2b45ed02bc301f5b50a322d07c7f5e6f56df8"
	Dec 13 08:33:24 addons-917695 kubelet[1506]: I1213 08:33:24.402682    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 08:33:32 addons-917695 kubelet[1506]: E1213 08:33:32.761285    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614812760852921 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:33:32 addons-917695 kubelet[1506]: E1213 08:33:32.761308    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614812760852921 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:33:42 addons-917695 kubelet[1506]: E1213 08:33:42.766296    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614822765625470 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:33:42 addons-917695 kubelet[1506]: E1213 08:33:42.766332    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614822765625470 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:33:52 addons-917695 kubelet[1506]: E1213 08:33:52.776195    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614832773888369 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:33:52 addons-917695 kubelet[1506]: E1213 08:33:52.776599    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614832773888369 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:02 addons-917695 kubelet[1506]: E1213 08:34:02.779755    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614842779263293 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:02 addons-917695 kubelet[1506]: E1213 08:34:02.779796    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614842779263293 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:12 addons-917695 kubelet[1506]: E1213 08:34:12.782994    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614852782385078 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:12 addons-917695 kubelet[1506]: E1213 08:34:12.783024    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614852782385078 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:16 addons-917695 kubelet[1506]: I1213 08:34:16.403647    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-fv8qk" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 08:34:22 addons-917695 kubelet[1506]: E1213 08:34:22.786470    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614862785967196 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:22 addons-917695 kubelet[1506]: E1213 08:34:22.786496    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614862785967196 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:32 addons-917695 kubelet[1506]: E1213 08:34:32.789381    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614872788823829 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:32 addons-917695 kubelet[1506]: E1213 08:34:32.789415    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614872788823829 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:34 addons-917695 kubelet[1506]: I1213 08:34:34.402587    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 08:34:42 addons-917695 kubelet[1506]: E1213 08:34:42.792863    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614882792333898 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:42 addons-917695 kubelet[1506]: E1213 08:34:42.792897    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614882792333898 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:52 addons-917695 kubelet[1506]: E1213 08:34:52.796002    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614892795519117 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:34:52 addons-917695 kubelet[1506]: E1213 08:34:52.796028    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614892795519117 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:35:02 addons-917695 kubelet[1506]: E1213 08:35:02.799151    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614902798684390 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:35:02 addons-917695 kubelet[1506]: E1213 08:35:02.799191    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614902798684390 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
	Dec 13 08:35:04 addons-917695 kubelet[1506]: I1213 08:35:04.971802    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vmr\" (UniqueName: \"kubernetes.io/projected/e008d43b-2b4f-48ac-aa3e-45941b2bbf49-kube-api-access-85vmr\") pod \"hello-world-app-5d498dc89-p9lr2\" (UID: \"e008d43b-2b4f-48ac-aa3e-45941b2bbf49\") " pod="default/hello-world-app-5d498dc89-p9lr2"
	
	
	==> storage-provisioner [5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945] <==
	W1213 08:34:42.205658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:44.209507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:44.218934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:46.223221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:46.231358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:48.234613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:48.240042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:50.243389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:50.249229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:52.254377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:52.260057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:54.263803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:54.269799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:56.273574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:56.279004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:58.283575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:34:58.289241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:00.293006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:00.298767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:02.303026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:02.308977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:04.312849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:04.321592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:06.326723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:35:06.335950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-917695 -n addons-917695
helpers_test.go:270: (dbg) Run:  kubectl --context addons-917695 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-917695 describe pod hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-917695 describe pod hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw: exit status 1 (76.15838ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-p9lr2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-917695/192.168.39.154
	Start Time:       Sat, 13 Dec 2025 08:35:04 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85vmr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-85vmr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-p9lr2 to addons-917695
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jwjv9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mbzzw" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-917695 describe pod hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable ingress-dns --alsologtostderr -v=1: (1.669384811s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable ingress --alsologtostderr -v=1: (7.709223808s)
--- FAIL: TestAddons/parallel/Ingress (153.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 image ls --format short --alsologtostderr: (2.468921761s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014502 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014502 image ls --format short --alsologtostderr:
I1213 08:42:05.513273   16495 out.go:360] Setting OutFile to fd 1 ...
I1213 08:42:05.513686   16495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:05.513729   16495 out.go:374] Setting ErrFile to fd 2...
I1213 08:42:05.513739   16495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:05.514152   16495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:42:05.515516   16495 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:05.515679   16495 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:05.518003   16495 ssh_runner.go:195] Run: systemctl --version
I1213 08:42:05.520554   16495 main.go:143] libmachine: domain functional-014502 has defined MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:05.520986   16495 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6f:3f:70", ip: ""} in network mk-functional-014502: {Iface:virbr1 ExpiryTime:2025-12-13 09:39:10 +0000 UTC Type:0 Mac:52:54:00:6f:3f:70 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-014502 Clientid:01:52:54:00:6f:3f:70}
I1213 08:42:05.521020   16495 main.go:143] libmachine: domain functional-014502 has defined IP address 192.168.39.248 and MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:05.521180   16495 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-014502/id_rsa Username:docker}
I1213 08:42:05.623024   16495 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 08:42:07.922644   16495 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.299587016s)
W1213 08:42:07.922705   16495 cache_images.go:736] Failed to list images for profile functional-014502 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1213 08:42:07.913303   10104 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-13T08:42:07Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.47s)
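Note: the stderr above shows `sudo crictl images --output json` dying with DeadlineExceeded, so the empty listing reflects a slow CRI response rather than proof that registry.k8s.io/pause is missing. A minimal manual re-check, assuming the functional-014502 profile is still running (the 30s value is only illustrative, chosen to be well above crictl's short default connection timeout):

	# list images inside the node with a longer CRI timeout
	out/minikube-linux-amd64 -p functional-014502 ssh -- sudo crictl --timeout 30s images
	# re-run the short-format listing the test asserts on
	out/minikube-linux-amd64 -p functional-014502 image ls --format short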

                                                
                                    
x
+
TestPreload (146.65s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-923878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-923878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m32.537756558s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-923878 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-923878 image pull gcr.io/k8s-minikube/busybox: (3.161509271s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-923878
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-923878: (6.988119323s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-923878 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1213 09:25:51.252202    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-923878 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (41.356600492s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-923878 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-13 09:26:26.007827576 +0000 UTC m=+3446.430218996
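For reference, the sequence the test drives is just the commands logged above (start without preload, pull busybox, stop, restart with preload, list images), and the restart rebuilt the image store from the generic preloaded tarball, which does not contain the busybox image pulled earlier. A minimal Go sketch of that flow — an illustration assuming out/minikube-linux-amd64 exists and the profile name is free, not the test's implementation — looks like this:

// preload_repro.go - replays the TestPreload steps shown in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run invokes the minikube binary with the given arguments and returns stdout.
func run(args ...string) string {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stderr = os.Stderr
	out, err := cmd.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "command failed:", args, err)
		os.Exit(1)
	}
	return string(out)
}

func main() {
	p := "test-preload-923878"
	run("start", "-p", p, "--memory=3072", "--wait=true", "--preload=false",
		"--driver=kvm2", "--container-runtime=crio")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--preload=true", "--wait=true",
		"--driver=kvm2", "--container-runtime=crio")

	images := run("-p", p, "image", "list")
	if !strings.Contains(images, "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox missing after preload restart (the failure above)")
		os.Exit(1)
	}
	fmt.Println("busybox survived the restart")
}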
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-923878 -n test-preload-923878
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-923878 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-613005 ssh -n multinode-613005-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ ssh     │ multinode-613005 ssh -n multinode-613005 sudo cat /home/docker/cp-test_multinode-613005-m03_multinode-613005.txt                                          │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ cp      │ multinode-613005 cp multinode-613005-m03:/home/docker/cp-test.txt multinode-613005-m02:/home/docker/cp-test_multinode-613005-m03_multinode-613005-m02.txt │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ ssh     │ multinode-613005 ssh -n multinode-613005-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ ssh     │ multinode-613005 ssh -n multinode-613005-m02 sudo cat /home/docker/cp-test_multinode-613005-m03_multinode-613005-m02.txt                                  │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ node    │ multinode-613005 node stop m03                                                                                                                            │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ node    │ multinode-613005 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:13 UTC │
	│ node    │ list -p multinode-613005                                                                                                                                  │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:13 UTC │                     │
	│ stop    │ -p multinode-613005                                                                                                                                       │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:13 UTC │ 13 Dec 25 09:16 UTC │
	│ start   │ -p multinode-613005 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:16 UTC │ 13 Dec 25 09:18 UTC │
	│ node    │ list -p multinode-613005                                                                                                                                  │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:18 UTC │                     │
	│ node    │ multinode-613005 node delete m03                                                                                                                          │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:18 UTC │ 13 Dec 25 09:18 UTC │
	│ stop    │ multinode-613005 stop                                                                                                                                     │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:18 UTC │ 13 Dec 25 09:21 UTC │
	│ start   │ -p multinode-613005 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:23 UTC │
	│ node    │ list -p multinode-613005                                                                                                                                  │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │                     │
	│ start   │ -p multinode-613005-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-613005-m02 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │                     │
	│ start   │ -p multinode-613005-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-613005-m03 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ node    │ add -p multinode-613005                                                                                                                                   │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │                     │
	│ delete  │ -p multinode-613005-m03                                                                                                                                   │ multinode-613005-m03 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:24 UTC │
	│ delete  │ -p multinode-613005                                                                                                                                       │ multinode-613005     │ jenkins │ v1.37.0 │ 13 Dec 25 09:24 UTC │ 13 Dec 25 09:24 UTC │
	│ start   │ -p test-preload-923878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-923878  │ jenkins │ v1.37.0 │ 13 Dec 25 09:24 UTC │ 13 Dec 25 09:25 UTC │
	│ image   │ test-preload-923878 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-923878  │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ stop    │ -p test-preload-923878                                                                                                                                    │ test-preload-923878  │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ start   │ -p test-preload-923878 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-923878  │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:26 UTC │
	│ image   │ test-preload-923878 image list                                                                                                                            │ test-preload-923878  │ jenkins │ v1.37.0 │ 13 Dec 25 09:26 UTC │ 13 Dec 25 09:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:25:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:25:44.517018   35922 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:25:44.517317   35922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:25:44.517328   35922 out.go:374] Setting ErrFile to fd 2...
	I1213 09:25:44.517335   35922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:25:44.517563   35922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:25:44.518066   35922 out.go:368] Setting JSON to false
	I1213 09:25:44.518984   35922 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4088,"bootTime":1765613856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:25:44.519047   35922 start.go:143] virtualization: kvm guest
	I1213 09:25:44.522025   35922 out.go:179] * [test-preload-923878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:25:44.523344   35922 notify.go:221] Checking for updates...
	I1213 09:25:44.523376   35922 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:25:44.524578   35922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:25:44.525724   35922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 09:25:44.526904   35922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 09:25:44.528034   35922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:25:44.529173   35922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:25:44.530693   35922 config.go:182] Loaded profile config "test-preload-923878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:25:44.531154   35922 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:25:44.567059   35922 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:25:44.568117   35922 start.go:309] selected driver: kvm2
	I1213 09:25:44.568132   35922 start.go:927] validating driver "kvm2" against &{Name:test-preload-923878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.34.2 ClusterName:test-preload-923878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:25:44.568232   35922 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:25:44.569101   35922 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:25:44.569123   35922 cni.go:84] Creating CNI manager for ""
	I1213 09:25:44.569174   35922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:25:44.569216   35922 start.go:353] cluster config:
	{Name:test-preload-923878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-923878 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:25:44.569317   35922 iso.go:125] acquiring lock: {Name:mk6cfae0203e3172b0791a477e21fba41da25205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:25:44.570701   35922 out.go:179] * Starting "test-preload-923878" primary control-plane node in "test-preload-923878" cluster
	I1213 09:25:44.571631   35922 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:25:44.571658   35922 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:25:44.571674   35922 cache.go:65] Caching tarball of preloaded images
	I1213 09:25:44.571747   35922 preload.go:238] Found /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:25:44.571759   35922 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:25:44.571838   35922 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/config.json ...
	I1213 09:25:44.572016   35922 start.go:360] acquireMachinesLock for test-preload-923878: {Name:mk6c8e990a56a1510f4ba4283e9407bcc2a7ff5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 09:25:44.572054   35922 start.go:364] duration metric: took 22.317µs to acquireMachinesLock for "test-preload-923878"
	I1213 09:25:44.572067   35922 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:25:44.572071   35922 fix.go:54] fixHost starting: 
	I1213 09:25:44.573890   35922 fix.go:112] recreateIfNeeded on test-preload-923878: state=Stopped err=<nil>
	W1213 09:25:44.573911   35922 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:25:44.575429   35922 out.go:252] * Restarting existing kvm2 VM for "test-preload-923878" ...
	I1213 09:25:44.575457   35922 main.go:143] libmachine: starting domain...
	I1213 09:25:44.575466   35922 main.go:143] libmachine: ensuring networks are active...
	I1213 09:25:44.576251   35922 main.go:143] libmachine: Ensuring network default is active
	I1213 09:25:44.576646   35922 main.go:143] libmachine: Ensuring network mk-test-preload-923878 is active
	I1213 09:25:44.577074   35922 main.go:143] libmachine: getting domain XML...
	I1213 09:25:44.578175   35922 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-923878</name>
	  <uuid>22a1879a-d4fc-4e74-80a5-796edc20d845</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/test-preload-923878.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:9c:da:32'/>
	      <source network='mk-test-preload-923878'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:6a:2e:65'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 09:25:45.870362   35922 main.go:143] libmachine: waiting for domain to start...
	I1213 09:25:45.871921   35922 main.go:143] libmachine: domain is now running
	I1213 09:25:45.871948   35922 main.go:143] libmachine: waiting for IP...
	I1213 09:25:45.872977   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:45.873629   35922 main.go:143] libmachine: domain test-preload-923878 has current primary IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:45.873641   35922 main.go:143] libmachine: found domain IP: 192.168.39.20
	I1213 09:25:45.873646   35922 main.go:143] libmachine: reserving static IP address...
	I1213 09:25:45.874103   35922 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-923878", mac: "52:54:00:9c:da:32", ip: "192.168.39.20"} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:24:16 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:45.874126   35922 main.go:143] libmachine: skip adding static IP to network mk-test-preload-923878 - found existing host DHCP lease matching {name: "test-preload-923878", mac: "52:54:00:9c:da:32", ip: "192.168.39.20"}
	I1213 09:25:45.874139   35922 main.go:143] libmachine: reserved static IP address 192.168.39.20 for domain test-preload-923878
	I1213 09:25:45.874148   35922 main.go:143] libmachine: waiting for SSH...
	I1213 09:25:45.874157   35922 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 09:25:45.876393   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:45.876862   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:24:16 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:45.876886   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:45.877037   35922 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:45.877243   35922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1213 09:25:45.877252   35922 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 09:25:48.949583   35922 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I1213 09:25:55.029680   35922 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I1213 09:25:58.152223   35922 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:25:58.156050   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.156476   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.156528   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.156779   35922 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/config.json ...
	I1213 09:25:58.156992   35922 machine.go:94] provisionDockerMachine start ...
	I1213 09:25:58.160278   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.160715   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.160749   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.160909   35922 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:58.161100   35922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1213 09:25:58.161110   35922 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:25:58.285152   35922 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 09:25:58.285178   35922 buildroot.go:166] provisioning hostname "test-preload-923878"
	I1213 09:25:58.287823   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.288216   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.288242   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.288418   35922 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:58.288623   35922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1213 09:25:58.288634   35922 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-923878 && echo "test-preload-923878" | sudo tee /etc/hostname
	I1213 09:25:58.419147   35922 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-923878
	
	I1213 09:25:58.422420   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.422949   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.422979   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.423242   35922 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:58.423499   35922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1213 09:25:58.423519   35922 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-923878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-923878/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-923878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:25:58.545990   35922 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:25:58.546016   35922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5761/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5761/.minikube}
	I1213 09:25:58.546033   35922 buildroot.go:174] setting up certificates
	I1213 09:25:58.546042   35922 provision.go:84] configureAuth start
	I1213 09:25:58.549430   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.549912   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.549948   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.552277   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.552764   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.552798   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.553027   35922 provision.go:143] copyHostCerts
	I1213 09:25:58.553127   35922 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5761/.minikube/ca.pem, removing ...
	I1213 09:25:58.553146   35922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5761/.minikube/ca.pem
	I1213 09:25:58.553228   35922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/ca.pem (1078 bytes)
	I1213 09:25:58.553368   35922 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5761/.minikube/cert.pem, removing ...
	I1213 09:25:58.553381   35922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5761/.minikube/cert.pem
	I1213 09:25:58.553431   35922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/cert.pem (1123 bytes)
	I1213 09:25:58.553605   35922 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5761/.minikube/key.pem, removing ...
	I1213 09:25:58.553624   35922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5761/.minikube/key.pem
	I1213 09:25:58.553679   35922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/key.pem (1679 bytes)
	I1213 09:25:58.553771   35922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem org=jenkins.test-preload-923878 san=[127.0.0.1 192.168.39.20 localhost minikube test-preload-923878]
	I1213 09:25:58.585576   35922 provision.go:177] copyRemoteCerts
	I1213 09:25:58.585657   35922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:25:58.588250   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.588673   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.588702   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.588881   35922 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/id_rsa Username:docker}
	I1213 09:25:58.678765   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 09:25:58.708761   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 09:25:58.738090   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 09:25:58.767496   35922 provision.go:87] duration metric: took 221.4433ms to configureAuth
	I1213 09:25:58.767524   35922 buildroot.go:189] setting minikube options for container-runtime
	I1213 09:25:58.767720   35922 config.go:182] Loaded profile config "test-preload-923878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:25:58.770233   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.770572   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:58.770595   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:58.770763   35922 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:58.770999   35922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1213 09:25:58.771032   35922 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:25:59.017869   35922 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:25:59.017897   35922 machine.go:97] duration metric: took 860.890287ms to provisionDockerMachine
	I1213 09:25:59.017909   35922 start.go:293] postStartSetup for "test-preload-923878" (driver="kvm2")
	I1213 09:25:59.017918   35922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:25:59.017969   35922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:25:59.020794   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.021163   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:59.021192   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.021359   35922 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/id_rsa Username:docker}
	I1213 09:25:59.116239   35922 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:25:59.121319   35922 info.go:137] Remote host: Buildroot 2025.02
	I1213 09:25:59.121345   35922 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5761/.minikube/addons for local assets ...
	I1213 09:25:59.121402   35922 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5761/.minikube/files for local assets ...
	I1213 09:25:59.121488   35922 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5761/.minikube/files/etc/ssl/certs/96972.pem -> 96972.pem in /etc/ssl/certs
	I1213 09:25:59.121573   35922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:25:59.133473   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/files/etc/ssl/certs/96972.pem --> /etc/ssl/certs/96972.pem (1708 bytes)
	I1213 09:25:59.163947   35922 start.go:296] duration metric: took 146.02671ms for postStartSetup
	I1213 09:25:59.163986   35922 fix.go:56] duration metric: took 14.591913858s for fixHost
	I1213 09:25:59.166444   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.166836   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:59.166876   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.167033   35922 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:59.167226   35922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1213 09:25:59.167236   35922 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 09:25:59.280682   35922 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765617959.240141703
	
	I1213 09:25:59.280708   35922 fix.go:216] guest clock: 1765617959.240141703
	I1213 09:25:59.280717   35922 fix.go:229] Guest: 2025-12-13 09:25:59.240141703 +0000 UTC Remote: 2025-12-13 09:25:59.163990006 +0000 UTC m=+14.697322329 (delta=76.151697ms)
	I1213 09:25:59.280737   35922 fix.go:200] guest clock delta is within tolerance: 76.151697ms
	I1213 09:25:59.280744   35922 start.go:83] releasing machines lock for "test-preload-923878", held for 14.708680748s
	I1213 09:25:59.283513   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.283954   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:59.283977   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.284499   35922 ssh_runner.go:195] Run: cat /version.json
	I1213 09:25:59.284523   35922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:25:59.287553   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.287617   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.287975   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:59.288004   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.288008   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:25:59.288028   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:25:59.288157   35922 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/id_rsa Username:docker}
	I1213 09:25:59.288157   35922 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/id_rsa Username:docker}
	I1213 09:25:59.403207   35922 ssh_runner.go:195] Run: systemctl --version
	I1213 09:25:59.409638   35922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:25:59.556949   35922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:25:59.564423   35922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:25:59.564496   35922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:25:59.584910   35922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 09:25:59.584938   35922 start.go:496] detecting cgroup driver to use...
	I1213 09:25:59.585004   35922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:25:59.604817   35922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:25:59.621998   35922 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:25:59.622057   35922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:25:59.640448   35922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:25:59.657847   35922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:25:59.805631   35922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:26:00.025086   35922 docker.go:234] disabling docker service ...
	I1213 09:26:00.025148   35922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:26:00.042309   35922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:26:00.057878   35922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:26:00.213180   35922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:26:00.354623   35922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:26:00.371063   35922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:26:00.394505   35922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:26:00.394579   35922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:26:00.407697   35922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 09:26:00.407783   35922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:26:00.420053   35922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:26:00.432547   35922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:26:00.445611   35922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:26:00.458984   35922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:26:00.471900   35922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:26:00.492897   35922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:26:00.505419   35922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:26:00.516944   35922 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 09:26:00.517002   35922 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 09:26:00.538167   35922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:26:00.550542   35922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:26:00.689226   35922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:26:00.800769   35922 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:26:00.800842   35922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:26:00.806032   35922 start.go:564] Will wait 60s for crictl version
	I1213 09:26:00.806087   35922 ssh_runner.go:195] Run: which crictl
	I1213 09:26:00.810182   35922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 09:26:00.844712   35922 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 09:26:00.844809   35922 ssh_runner.go:195] Run: crio --version
	I1213 09:26:00.874250   35922 ssh_runner.go:195] Run: crio --version
	I1213 09:26:00.903888   35922 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 09:26:00.907457   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:26:00.907841   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:26:00.907864   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:26:00.908051   35922 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 09:26:00.912706   35922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:26:00.927431   35922 kubeadm.go:884] updating cluster {Name:test-preload-923878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.2 ClusterName:test-preload-923878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:26:00.927531   35922 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:26:00.927571   35922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:26:00.962925   35922 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1213 09:26:00.962988   35922 ssh_runner.go:195] Run: which lz4
	I1213 09:26:00.967199   35922 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 09:26:00.971932   35922 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 09:26:00.971963   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1213 09:26:02.207222   35922 crio.go:462] duration metric: took 1.240051009s to copy over tarball
	I1213 09:26:02.207299   35922 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 09:26:03.681858   35922 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.474530874s)
	I1213 09:26:03.681894   35922 crio.go:469] duration metric: took 1.474644062s to extract the tarball
	I1213 09:26:03.681905   35922 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 09:26:03.719995   35922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:26:03.762803   35922 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:26:03.762825   35922 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:26:03.762831   35922 kubeadm.go:935] updating node { 192.168.39.20 8443 v1.34.2 crio true true} ...
	I1213 09:26:03.762950   35922 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-923878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-923878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:26:03.763028   35922 ssh_runner.go:195] Run: crio config
	I1213 09:26:03.807379   35922 cni.go:84] Creating CNI manager for ""
	I1213 09:26:03.807403   35922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:26:03.807419   35922 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:26:03.807441   35922 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.20 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-923878 NodeName:test-preload-923878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:26:03.807590   35922 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-923878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.20"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:26:03.807666   35922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:26:03.820070   35922 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:26:03.820138   35922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:26:03.832032   35922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1213 09:26:03.852720   35922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:26:03.873377   35922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1213 09:26:03.894427   35922 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I1213 09:26:03.898649   35922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:26:03.913016   35922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:26:04.052561   35922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:26:04.072479   35922 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878 for IP: 192.168.39.20
	I1213 09:26:04.072510   35922 certs.go:195] generating shared ca certs ...
	I1213 09:26:04.072532   35922 certs.go:227] acquiring lock for ca certs: {Name:mkfb64e4be02ab559f3d464592a7c41204abf76e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:26:04.072762   35922 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key
	I1213 09:26:04.072832   35922 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key
	I1213 09:26:04.072849   35922 certs.go:257] generating profile certs ...
	I1213 09:26:04.072972   35922 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/client.key
	I1213 09:26:04.073076   35922 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/apiserver.key.1d921e20
	I1213 09:26:04.073135   35922 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/proxy-client.key
	I1213 09:26:04.073333   35922 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/9697.pem (1338 bytes)
	W1213 09:26:04.073392   35922 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5761/.minikube/certs/9697_empty.pem, impossibly tiny 0 bytes
	I1213 09:26:04.073408   35922 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:26:04.073449   35922 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem (1078 bytes)
	I1213 09:26:04.073492   35922 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:26:04.073530   35922 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem (1679 bytes)
	I1213 09:26:04.073598   35922 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/files/etc/ssl/certs/96972.pem (1708 bytes)
	I1213 09:26:04.074573   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:26:04.121980   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 09:26:04.163104   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:26:04.191596   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:26:04.224853   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 09:26:04.254472   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:26:04.283547   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:26:04.312910   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:26:04.342321   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/files/etc/ssl/certs/96972.pem --> /usr/share/ca-certificates/96972.pem (1708 bytes)
	I1213 09:26:04.371713   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:26:04.400965   35922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/certs/9697.pem --> /usr/share/ca-certificates/9697.pem (1338 bytes)
	I1213 09:26:04.430362   35922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:26:04.451598   35922 ssh_runner.go:195] Run: openssl version
	I1213 09:26:04.457761   35922 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:26:04.469456   35922 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:26:04.481207   35922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:26:04.486469   35922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:26:04.486535   35922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:26:04.494201   35922 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:26:04.506256   35922 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 09:26:04.518788   35922 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9697.pem
	I1213 09:26:04.530502   35922 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9697.pem /etc/ssl/certs/9697.pem
	I1213 09:26:04.542618   35922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9697.pem
	I1213 09:26:04.547923   35922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:42 /usr/share/ca-certificates/9697.pem
	I1213 09:26:04.547991   35922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9697.pem
	I1213 09:26:04.555492   35922 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:26:04.568020   35922 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9697.pem /etc/ssl/certs/51391683.0
	I1213 09:26:04.579944   35922 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/96972.pem
	I1213 09:26:04.591383   35922 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/96972.pem /etc/ssl/certs/96972.pem
	I1213 09:26:04.603373   35922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96972.pem
	I1213 09:26:04.608762   35922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:42 /usr/share/ca-certificates/96972.pem
	I1213 09:26:04.608825   35922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96972.pem
	I1213 09:26:04.616149   35922 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:26:04.628183   35922 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/96972.pem /etc/ssl/certs/3ec20f2e.0
	I1213 09:26:04.640507   35922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:26:04.646193   35922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:26:04.654384   35922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:26:04.662284   35922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:26:04.670000   35922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:26:04.678022   35922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:26:04.685897   35922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 09:26:04.693684   35922 kubeadm.go:401] StartCluster: {Name:test-preload-923878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-923878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:26:04.693780   35922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:26:04.693856   35922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:26:04.726785   35922 cri.go:89] found id: ""
	I1213 09:26:04.726868   35922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:26:04.739533   35922 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:26:04.739557   35922 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:26:04.739613   35922 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:26:04.752196   35922 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:26:04.752635   35922 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-923878" does not appear in /home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 09:26:04.752755   35922 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5761/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-923878" cluster setting kubeconfig missing "test-preload-923878" context setting]
	I1213 09:26:04.753027   35922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/kubeconfig: {Name:mkf140a0b47414a2ed3efe0851d61f10012610de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:26:04.753592   35922 kapi.go:59] client config for test-preload-923878: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/client.key", CAFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 09:26:04.753976   35922 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 09:26:04.753991   35922 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 09:26:04.753997   35922 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 09:26:04.754001   35922 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 09:26:04.754005   35922 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 09:26:04.754396   35922 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:26:04.770898   35922 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.20
	I1213 09:26:04.770938   35922 kubeadm.go:1161] stopping kube-system containers ...
	I1213 09:26:04.770953   35922 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 09:26:04.771014   35922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:26:04.815150   35922 cri.go:89] found id: ""
	I1213 09:26:04.815240   35922 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 09:26:04.841583   35922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:26:04.854364   35922 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:26:04.854390   35922 kubeadm.go:158] found existing configuration files:
	
	I1213 09:26:04.854447   35922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:26:04.865623   35922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:26:04.865706   35922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:26:04.877977   35922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:26:04.890487   35922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:26:04.890559   35922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:26:04.902643   35922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:26:04.914269   35922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:26:04.914347   35922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:26:04.927144   35922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:26:04.938755   35922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:26:04.938842   35922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:26:04.950853   35922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:26:04.963215   35922 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:26:05.019050   35922 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:26:07.179225   35922 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.160138169s)
	I1213 09:26:07.179331   35922 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:26:07.429862   35922 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:26:07.495467   35922 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:26:07.562620   35922 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:26:07.562717   35922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:26:08.063522   35922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:26:08.562822   35922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:26:09.063817   35922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:26:09.092191   35922 api_server.go:72] duration metric: took 1.529587424s to wait for apiserver process to appear ...
	I1213 09:26:09.092223   35922 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:26:09.092246   35922 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1213 09:26:11.238131   35922 api_server.go:279] https://192.168.39.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:26:11.238162   35922 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:26:11.238177   35922 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1213 09:26:11.288124   35922 api_server.go:279] https://192.168.39.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:26:11.288150   35922 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:26:11.592387   35922 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1213 09:26:11.601488   35922 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:26:11.601517   35922 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:26:12.093190   35922 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1213 09:26:12.098853   35922 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:26:12.098883   35922 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:26:12.592470   35922 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1213 09:26:12.598206   35922 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I1213 09:26:12.607279   35922 api_server.go:141] control plane version: v1.34.2
	I1213 09:26:12.607316   35922 api_server.go:131] duration metric: took 3.515085512s to wait for apiserver health ...
	I1213 09:26:12.607325   35922 cni.go:84] Creating CNI manager for ""
	I1213 09:26:12.607333   35922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:26:12.609165   35922 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 09:26:12.610403   35922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 09:26:12.622742   35922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 09:26:12.645562   35922 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:26:12.651845   35922 system_pods.go:59] 7 kube-system pods found
	I1213 09:26:12.651890   35922 system_pods.go:61] "coredns-66bc5c9577-s9hrv" [ed441581-eded-48c6-ad07-2dea59d9b038] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:26:12.651902   35922 system_pods.go:61] "etcd-test-preload-923878" [57057246-b5ae-498e-be1f-e43785364e98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:26:12.651913   35922 system_pods.go:61] "kube-apiserver-test-preload-923878" [2bd610fd-870d-4675-b9e8-043b99f198ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:26:12.651922   35922 system_pods.go:61] "kube-controller-manager-test-preload-923878" [465c661a-8f1b-4c2f-bc93-b43c8b5c3cf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:26:12.651932   35922 system_pods.go:61] "kube-proxy-s76lg" [b409d074-26cf-41d8-8711-26673b2a0e9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:26:12.651941   35922 system_pods.go:61] "kube-scheduler-test-preload-923878" [920a7a76-75cc-4fff-a546-9182d6f1abb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:26:12.651951   35922 system_pods.go:61] "storage-provisioner" [bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:26:12.651961   35922 system_pods.go:74] duration metric: took 6.377083ms to wait for pod list to return data ...
	I1213 09:26:12.651975   35922 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:26:12.656909   35922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 09:26:12.656952   35922 node_conditions.go:123] node cpu capacity is 2
	I1213 09:26:12.656969   35922 node_conditions.go:105] duration metric: took 4.988193ms to run NodePressure ...
	I1213 09:26:12.657032   35922 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:26:12.931029   35922 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1213 09:26:12.934966   35922 kubeadm.go:744] kubelet initialised
	I1213 09:26:12.934986   35922 kubeadm.go:745] duration metric: took 3.931563ms waiting for restarted kubelet to initialise ...
	I1213 09:26:12.935002   35922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 09:26:12.950175   35922 ops.go:34] apiserver oom_adj: -16
	I1213 09:26:12.950201   35922 kubeadm.go:602] duration metric: took 8.210636668s to restartPrimaryControlPlane
	I1213 09:26:12.950213   35922 kubeadm.go:403] duration metric: took 8.256539389s to StartCluster
	I1213 09:26:12.950229   35922 settings.go:142] acquiring lock: {Name:mk0e8a3f7580725c20103c6ec548a6aa0dd069a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:26:12.950328   35922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 09:26:12.950891   35922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/kubeconfig: {Name:mkf140a0b47414a2ed3efe0851d61f10012610de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:26:12.951155   35922 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:26:12.951254   35922 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:26:12.951358   35922 config.go:182] Loaded profile config "test-preload-923878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:26:12.951372   35922 addons.go:70] Setting storage-provisioner=true in profile "test-preload-923878"
	I1213 09:26:12.951394   35922 addons.go:239] Setting addon storage-provisioner=true in "test-preload-923878"
	W1213 09:26:12.951406   35922 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:26:12.951406   35922 addons.go:70] Setting default-storageclass=true in profile "test-preload-923878"
	I1213 09:26:12.951436   35922 host.go:66] Checking if "test-preload-923878" exists ...
	I1213 09:26:12.951442   35922 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-923878"
	I1213 09:26:12.952927   35922 out.go:179] * Verifying Kubernetes components...
	I1213 09:26:12.953684   35922 kapi.go:59] client config for test-preload-923878: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/client.key", CAFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 09:26:12.953937   35922 addons.go:239] Setting addon default-storageclass=true in "test-preload-923878"
	W1213 09:26:12.953954   35922 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:26:12.953977   35922 host.go:66] Checking if "test-preload-923878" exists ...
	I1213 09:26:12.954380   35922 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:26:12.954413   35922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:26:12.955493   35922 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:26:12.955520   35922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:26:12.955559   35922 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:26:12.955572   35922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:26:12.958792   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:26:12.958918   35922 main.go:143] libmachine: domain test-preload-923878 has defined MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:26:12.959410   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:26:12.959445   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:26:12.959495   35922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:da:32", ip: ""} in network mk-test-preload-923878: {Iface:virbr1 ExpiryTime:2025-12-13 10:25:55 +0000 UTC Type:0 Mac:52:54:00:9c:da:32 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:test-preload-923878 Clientid:01:52:54:00:9c:da:32}
	I1213 09:26:12.959528   35922 main.go:143] libmachine: domain test-preload-923878 has defined IP address 192.168.39.20 and MAC address 52:54:00:9c:da:32 in network mk-test-preload-923878
	I1213 09:26:12.959674   35922 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/id_rsa Username:docker}
	I1213 09:26:12.959812   35922 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/test-preload-923878/id_rsa Username:docker}
	I1213 09:26:13.154961   35922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:26:13.176741   35922 node_ready.go:35] waiting up to 6m0s for node "test-preload-923878" to be "Ready" ...
	I1213 09:26:13.222674   35922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:26:13.359330   35922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:26:14.017015   35922 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 09:26:14.018381   35922 addons.go:530] duration metric: took 1.067148175s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1213 09:26:15.181768   35922 node_ready.go:57] node "test-preload-923878" has "Ready":"False" status (will retry)
	W1213 09:26:17.680860   35922 node_ready.go:57] node "test-preload-923878" has "Ready":"False" status (will retry)
	W1213 09:26:19.681316   35922 node_ready.go:57] node "test-preload-923878" has "Ready":"False" status (will retry)
	I1213 09:26:21.681830   35922 node_ready.go:49] node "test-preload-923878" is "Ready"
	I1213 09:26:21.681857   35922 node_ready.go:38] duration metric: took 8.505079788s for node "test-preload-923878" to be "Ready" ...
	I1213 09:26:21.681873   35922 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:26:21.681927   35922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:26:21.713041   35922 api_server.go:72] duration metric: took 8.761850189s to wait for apiserver process to appear ...
	I1213 09:26:21.713071   35922 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:26:21.713091   35922 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1213 09:26:21.717716   35922 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I1213 09:26:21.718718   35922 api_server.go:141] control plane version: v1.34.2
	I1213 09:26:21.718737   35922 api_server.go:131] duration metric: took 5.660083ms to wait for apiserver health ...
	I1213 09:26:21.718746   35922 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:26:21.721864   35922 system_pods.go:59] 7 kube-system pods found
	I1213 09:26:21.721888   35922 system_pods.go:61] "coredns-66bc5c9577-s9hrv" [ed441581-eded-48c6-ad07-2dea59d9b038] Running
	I1213 09:26:21.721896   35922 system_pods.go:61] "etcd-test-preload-923878" [57057246-b5ae-498e-be1f-e43785364e98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:26:21.721901   35922 system_pods.go:61] "kube-apiserver-test-preload-923878" [2bd610fd-870d-4675-b9e8-043b99f198ea] Running
	I1213 09:26:21.721907   35922 system_pods.go:61] "kube-controller-manager-test-preload-923878" [465c661a-8f1b-4c2f-bc93-b43c8b5c3cf2] Running
	I1213 09:26:21.721911   35922 system_pods.go:61] "kube-proxy-s76lg" [b409d074-26cf-41d8-8711-26673b2a0e9d] Running
	I1213 09:26:21.721916   35922 system_pods.go:61] "kube-scheduler-test-preload-923878" [920a7a76-75cc-4fff-a546-9182d6f1abb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:26:21.721920   35922 system_pods.go:61] "storage-provisioner" [bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1] Running
	I1213 09:26:21.721926   35922 system_pods.go:74] duration metric: took 3.174667ms to wait for pod list to return data ...
	I1213 09:26:21.721932   35922 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:26:21.724814   35922 default_sa.go:45] found service account: "default"
	I1213 09:26:21.724841   35922 default_sa.go:55] duration metric: took 2.901397ms for default service account to be created ...
	I1213 09:26:21.724849   35922 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:26:21.727427   35922 system_pods.go:86] 7 kube-system pods found
	I1213 09:26:21.727449   35922 system_pods.go:89] "coredns-66bc5c9577-s9hrv" [ed441581-eded-48c6-ad07-2dea59d9b038] Running
	I1213 09:26:21.727457   35922 system_pods.go:89] "etcd-test-preload-923878" [57057246-b5ae-498e-be1f-e43785364e98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:26:21.727461   35922 system_pods.go:89] "kube-apiserver-test-preload-923878" [2bd610fd-870d-4675-b9e8-043b99f198ea] Running
	I1213 09:26:21.727467   35922 system_pods.go:89] "kube-controller-manager-test-preload-923878" [465c661a-8f1b-4c2f-bc93-b43c8b5c3cf2] Running
	I1213 09:26:21.727471   35922 system_pods.go:89] "kube-proxy-s76lg" [b409d074-26cf-41d8-8711-26673b2a0e9d] Running
	I1213 09:26:21.727476   35922 system_pods.go:89] "kube-scheduler-test-preload-923878" [920a7a76-75cc-4fff-a546-9182d6f1abb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:26:21.727480   35922 system_pods.go:89] "storage-provisioner" [bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1] Running
	I1213 09:26:21.727485   35922 system_pods.go:126] duration metric: took 2.632802ms to wait for k8s-apps to be running ...
	I1213 09:26:21.727491   35922 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:26:21.727537   35922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:26:21.744144   35922 system_svc.go:56] duration metric: took 16.642456ms WaitForService to wait for kubelet
	I1213 09:26:21.744169   35922 kubeadm.go:587] duration metric: took 8.792982413s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:26:21.744186   35922 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:26:21.747851   35922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 09:26:21.747870   35922 node_conditions.go:123] node cpu capacity is 2
	I1213 09:26:21.747879   35922 node_conditions.go:105] duration metric: took 3.689996ms to run NodePressure ...
	I1213 09:26:21.747890   35922 start.go:242] waiting for startup goroutines ...
	I1213 09:26:21.747900   35922 start.go:247] waiting for cluster config update ...
	I1213 09:26:21.747913   35922 start.go:256] writing updated cluster config ...
	I1213 09:26:21.748203   35922 ssh_runner.go:195] Run: rm -f paused
	I1213 09:26:21.754668   35922 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:26:21.755081   35922 kapi.go:59] client config for test-preload-923878: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/profiles/test-preload-923878/client.key", CAFile:"/home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 09:26:21.759910   35922 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s9hrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:21.771862   35922 pod_ready.go:94] pod "coredns-66bc5c9577-s9hrv" is "Ready"
	I1213 09:26:21.771889   35922 pod_ready.go:86] duration metric: took 11.958893ms for pod "coredns-66bc5c9577-s9hrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:21.774445   35922 pod_ready.go:83] waiting for pod "etcd-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:26:23.780591   35922 pod_ready.go:104] pod "etcd-test-preload-923878" is not "Ready", error: <nil>
	I1213 09:26:24.280876   35922 pod_ready.go:94] pod "etcd-test-preload-923878" is "Ready"
	I1213 09:26:24.280900   35922 pod_ready.go:86] duration metric: took 2.506435381s for pod "etcd-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:24.282961   35922 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:24.287436   35922 pod_ready.go:94] pod "kube-apiserver-test-preload-923878" is "Ready"
	I1213 09:26:24.287457   35922 pod_ready.go:86] duration metric: took 4.476285ms for pod "kube-apiserver-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:24.289782   35922 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:24.294007   35922 pod_ready.go:94] pod "kube-controller-manager-test-preload-923878" is "Ready"
	I1213 09:26:24.294027   35922 pod_ready.go:86] duration metric: took 4.2275ms for pod "kube-controller-manager-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:24.359169   35922 pod_ready.go:83] waiting for pod "kube-proxy-s76lg" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:24.762689   35922 pod_ready.go:94] pod "kube-proxy-s76lg" is "Ready"
	I1213 09:26:24.762713   35922 pod_ready.go:86] duration metric: took 403.519281ms for pod "kube-proxy-s76lg" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:24.959861   35922 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:25.758896   35922 pod_ready.go:94] pod "kube-scheduler-test-preload-923878" is "Ready"
	I1213 09:26:25.758925   35922 pod_ready.go:86] duration metric: took 799.04013ms for pod "kube-scheduler-test-preload-923878" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:26:25.758940   35922 pod_ready.go:40] duration metric: took 4.00423238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:26:25.801689   35922 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:26:25.803738   35922 out.go:179] * Done! kubectl is now configured to use "test-preload-923878" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.577613625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617986577586734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a5090dd-7cef-4327-83f1-e355c0a3a068 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.578469435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bee72d8a-e6d4-4b39-99f3-9b9a3fe7bb83 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.578590254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bee72d8a-e6d4-4b39-99f3-9b9a3fe7bb83 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.578782419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa70994103c80fdc62efc823935ddf0456e79bc46bb25ae2860fe389a8ee9f88,PodSandboxId:a8e9bc96ae9cd0db8cf66720bda171583ed5d24740a8f687846a6b1a88d471d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617979604755778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-s9hrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed441581-eded-48c6-ad07-2dea59d9b038,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88230e71e5a5baf70de3cbcd710f758da8608b08ab53a3101b74dfa068ec98d8,PodSandboxId:84f6ef92daca6dddfaea4bce13c47f0e457340878af8606e532bd0a8d71b05cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617971957820595,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s76lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b409d074-26cf-41d8-8711-26673b2a0e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:763c2715f29301ff20d1bbbdc7e52b1724db2c64731372d9340a4302cdaafea6,PodSandboxId:e465208d4ffbff87128c65e3659aceb4b8c8892f70b20241db983656b8b6f90c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617971919460304,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c07767fbb9a47a63c4b13d0df94048bc4f18812d06f6b726efa34ad44b4f6a,PodSandboxId:3c30f67af58e82d23462020742d9fda495fe2faf0d60078b5416526d79512c90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617968422973606,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a953a984552e6e1245762d205fa51e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f03176a883ec125bc0b2a68c912a71e481a85bf2b41dbd786959627cd243daa,PodSandboxId:345eb2a5966bed7c712667c541c53d2ff2f7d8afdf5b46163d0e2a40d03cab4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765617968407871678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4581fabf77e666ff9c164c7a6bbde8f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93e27861e3fd4b48167cfdd1792abc9c1471a69c2b02de3be40426cd7824f8e,PodSandboxId:7eccdbf4c72eff379a82b8bf617ed7d2cc58c7d09f5f038361a80e2a5375a47d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617968398114201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c9f6fdc9b2c1365990b37a3342f878,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12bb60f6eed66064b5d25203ea9ed5780de86f57ec4decfd89a97e6cf8d00320,PodSandboxId:de3b033ae68ec0e9d07183ed816bab28a63731674debee550823f3453508abad,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617968338424040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a30210880dacdf79a50ba25e128e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bee72d8a-e6d4-4b39-99f3-9b9a3fe7bb83 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.611310565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a5dc8ba-a7e7-4ff7-ab3b-446aafab8a63 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.611387046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a5dc8ba-a7e7-4ff7-ab3b-446aafab8a63 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.612680822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=881d8eac-661b-4a3f-a2b6-cd70bb40c3c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.613058742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617986613038151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=881d8eac-661b-4a3f-a2b6-cd70bb40c3c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.613784998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58f43abe-7c9d-4da6-9cae-7e9ce5fac971 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.613900315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58f43abe-7c9d-4da6-9cae-7e9ce5fac971 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.614234216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa70994103c80fdc62efc823935ddf0456e79bc46bb25ae2860fe389a8ee9f88,PodSandboxId:a8e9bc96ae9cd0db8cf66720bda171583ed5d24740a8f687846a6b1a88d471d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617979604755778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-s9hrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed441581-eded-48c6-ad07-2dea59d9b038,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88230e71e5a5baf70de3cbcd710f758da8608b08ab53a3101b74dfa068ec98d8,PodSandboxId:84f6ef92daca6dddfaea4bce13c47f0e457340878af8606e532bd0a8d71b05cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617971957820595,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s76lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b409d074-26cf-41d8-8711-26673b2a0e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:763c2715f29301ff20d1bbbdc7e52b1724db2c64731372d9340a4302cdaafea6,PodSandboxId:e465208d4ffbff87128c65e3659aceb4b8c8892f70b20241db983656b8b6f90c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617971919460304,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c07767fbb9a47a63c4b13d0df94048bc4f18812d06f6b726efa34ad44b4f6a,PodSandboxId:3c30f67af58e82d23462020742d9fda495fe2faf0d60078b5416526d79512c90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617968422973606,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a953a984552e6e1245762d205fa51e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f03176a883ec125bc0b2a68c912a71e481a85bf2b41dbd786959627cd243daa,PodSandboxId:345eb2a5966bed7c712667c541c53d2ff2f7d8afdf5b46163d0e2a40d03cab4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765617968407871678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4581fabf77e666ff9c164c7a6bbde8f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93e27861e3fd4b48167cfdd1792abc9c1471a69c2b02de3be40426cd7824f8e,PodSandboxId:7eccdbf4c72eff379a82b8bf617ed7d2cc58c7d09f5f038361a80e2a5375a47d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617968398114201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c9f6fdc9b2c1365990b37a3342f878,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12bb60f6eed66064b5d25203ea9ed5780de86f57ec4decfd89a97e6cf8d00320,PodSandboxId:de3b033ae68ec0e9d07183ed816bab28a63731674debee550823f3453508abad,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617968338424040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a30210880dacdf79a50ba25e128e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58f43abe-7c9d-4da6-9cae-7e9ce5fac971 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.647137635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94252aa8-48ee-4ee3-9048-2f12eb1d43e1 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.647636137Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94252aa8-48ee-4ee3-9048-2f12eb1d43e1 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.649217146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54a01be1-d8c4-43f1-9d4d-2262e5aaa3fc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.650086490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617986650012048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54a01be1-d8c4-43f1-9d4d-2262e5aaa3fc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.651084187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cf59564-0f27-45f9-9e26-af7ec14ff3bd name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.651276970Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cf59564-0f27-45f9-9e26-af7ec14ff3bd name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.651485737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa70994103c80fdc62efc823935ddf0456e79bc46bb25ae2860fe389a8ee9f88,PodSandboxId:a8e9bc96ae9cd0db8cf66720bda171583ed5d24740a8f687846a6b1a88d471d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617979604755778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-s9hrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed441581-eded-48c6-ad07-2dea59d9b038,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88230e71e5a5baf70de3cbcd710f758da8608b08ab53a3101b74dfa068ec98d8,PodSandboxId:84f6ef92daca6dddfaea4bce13c47f0e457340878af8606e532bd0a8d71b05cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617971957820595,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s76lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b409d074-26cf-41d8-8711-26673b2a0e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:763c2715f29301ff20d1bbbdc7e52b1724db2c64731372d9340a4302cdaafea6,PodSandboxId:e465208d4ffbff87128c65e3659aceb4b8c8892f70b20241db983656b8b6f90c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617971919460304,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c07767fbb9a47a63c4b13d0df94048bc4f18812d06f6b726efa34ad44b4f6a,PodSandboxId:3c30f67af58e82d23462020742d9fda495fe2faf0d60078b5416526d79512c90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617968422973606,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a953a984552e6e1245762d205fa51e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f03176a883ec125bc0b2a68c912a71e481a85bf2b41dbd786959627cd243daa,PodSandboxId:345eb2a5966bed7c712667c541c53d2ff2f7d8afdf5b46163d0e2a40d03cab4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765617968407871678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4581fabf77e666ff9c164c7a6bbde8f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93e27861e3fd4b48167cfdd1792abc9c1471a69c2b02de3be40426cd7824f8e,PodSandboxId:7eccdbf4c72eff379a82b8bf617ed7d2cc58c7d09f5f038361a80e2a5375a47d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617968398114201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c9f6fdc9b2c1365990b37a3342f878,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12bb60f6eed66064b5d25203ea9ed5780de86f57ec4decfd89a97e6cf8d00320,PodSandboxId:de3b033ae68ec0e9d07183ed816bab28a63731674debee550823f3453508abad,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617968338424040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a30210880dacdf79a50ba25e128e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cf59564-0f27-45f9-9e26-af7ec14ff3bd name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.680923956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcea4b14-30be-48b7-8bb9-5e1fdfa62325 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.681010323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcea4b14-30be-48b7-8bb9-5e1fdfa62325 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.682778851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e01bc529-9f15-4157-8521-3cb21cfa544b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.683215870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617986683168860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e01bc529-9f15-4157-8521-3cb21cfa544b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.684326430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a37d795c-957b-43cf-b1d3-bac965b9cb76 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.684395726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a37d795c-957b-43cf-b1d3-bac965b9cb76 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:26:26 test-preload-923878 crio[839]: time="2025-12-13 09:26:26.684618301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa70994103c80fdc62efc823935ddf0456e79bc46bb25ae2860fe389a8ee9f88,PodSandboxId:a8e9bc96ae9cd0db8cf66720bda171583ed5d24740a8f687846a6b1a88d471d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617979604755778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-s9hrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed441581-eded-48c6-ad07-2dea59d9b038,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88230e71e5a5baf70de3cbcd710f758da8608b08ab53a3101b74dfa068ec98d8,PodSandboxId:84f6ef92daca6dddfaea4bce13c47f0e457340878af8606e532bd0a8d71b05cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617971957820595,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s76lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b409d074-26cf-41d8-8711-26673b2a0e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:763c2715f29301ff20d1bbbdc7e52b1724db2c64731372d9340a4302cdaafea6,PodSandboxId:e465208d4ffbff87128c65e3659aceb4b8c8892f70b20241db983656b8b6f90c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617971919460304,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c07767fbb9a47a63c4b13d0df94048bc4f18812d06f6b726efa34ad44b4f6a,PodSandboxId:3c30f67af58e82d23462020742d9fda495fe2faf0d60078b5416526d79512c90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617968422973606,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a953a984552e6e1245762d205fa51e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f03176a883ec125bc0b2a68c912a71e481a85bf2b41dbd786959627cd243daa,PodSandboxId:345eb2a5966bed7c712667c541c53d2ff2f7d8afdf5b46163d0e2a40d03cab4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765617968407871678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4581fabf77e666ff9c164c7a6bbde8f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93e27861e3fd4b48167cfdd1792abc9c1471a69c2b02de3be40426cd7824f8e,PodSandboxId:7eccdbf4c72eff379a82b8bf617ed7d2cc58c7d09f5f038361a80e2a5375a47d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617968398114201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c9f6fdc9b2c1365990b37a3342f878,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12bb60f6eed66064b5d25203ea9ed5780de86f57ec4decfd89a97e6cf8d00320,PodSandboxId:de3b033ae68ec0e9d07183ed816bab28a63731674debee550823f3453508abad,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617968338424040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-923878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a30210880dacdf79a50ba25e128e80,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a37d795c-957b-43cf-b1d3-bac965b9cb76 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	fa70994103c80       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   7 seconds ago       Running             coredns                   1                   a8e9bc96ae9cd       coredns-66bc5c9577-s9hrv                      kube-system
	88230e71e5a5b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   14 seconds ago      Running             kube-proxy                1                   84f6ef92daca6       kube-proxy-s76lg                              kube-system
	763c2715f2930       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   e465208d4ffbf       storage-provisioner                           kube-system
	19c07767fbb9a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   18 seconds ago      Running             kube-scheduler            1                   3c30f67af58e8       kube-scheduler-test-preload-923878            kube-system
	2f03176a883ec       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   18 seconds ago      Running             kube-apiserver            1                   345eb2a5966be       kube-apiserver-test-preload-923878            kube-system
	b93e27861e3fd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   18 seconds ago      Running             kube-controller-manager   1                   7eccdbf4c72ef       kube-controller-manager-test-preload-923878   kube-system
	12bb60f6eed66       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 seconds ago      Running             etcd                      1                   de3b033ae68ec       etcd-test-preload-923878                      kube-system
	
	
	==> coredns [fa70994103c80fdc62efc823935ddf0456e79bc46bb25ae2860fe389a8ee9f88] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45179 - 17403 "HINFO IN 4384559701919858938.3265353588811804515. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070682866s
	
	
	==> describe nodes <==
	Name:               test-preload-923878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-923878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=test-preload-923878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_24_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:24:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-923878
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:26:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:26:21 +0000   Sat, 13 Dec 2025 09:24:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:26:21 +0000   Sat, 13 Dec 2025 09:24:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:26:21 +0000   Sat, 13 Dec 2025 09:24:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:26:21 +0000   Sat, 13 Dec 2025 09:26:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    test-preload-923878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 22a1879ad4fc4e7480a5796edc20d845
	  System UUID:                22a1879a-d4fc-4e74-80a5-796edc20d845
	  Boot ID:                    b3562fa4-c121-4900-80c7-74985ae93663
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-s9hrv                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     92s
	  kube-system                 etcd-test-preload-923878                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-923878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-test-preload-923878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-s76lg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-test-preload-923878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 91s                  kube-proxy       
	  Normal   Starting                 14s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node test-preload-923878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node test-preload-923878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node test-preload-923878 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     98s                  kubelet          Node test-preload-923878 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  98s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  98s                  kubelet          Node test-preload-923878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s                  kubelet          Node test-preload-923878 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 98s                  kubelet          Starting kubelet.
	  Normal   NodeReady                97s                  kubelet          Node test-preload-923878 status is now: NodeReady
	  Normal   RegisteredNode           94s                  node-controller  Node test-preload-923878 event: Registered Node test-preload-923878 in Controller
	  Normal   Starting                 19s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node test-preload-923878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node test-preload-923878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node test-preload-923878 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                  kubelet          Node test-preload-923878 has been rebooted, boot id: b3562fa4-c121-4900-80c7-74985ae93663
	  Normal   RegisteredNode           12s                  node-controller  Node test-preload-923878 event: Registered Node test-preload-923878 in Controller
	
	
	==> dmesg <==
	[Dec13 09:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001094] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003017] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.953844] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec13 09:26] kauditd_printk_skb: 88 callbacks suppressed
	[  +4.543959] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.000035] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [12bb60f6eed66064b5d25203ea9ed5780de86f57ec4decfd89a97e6cf8d00320] <==
	{"level":"warn","ts":"2025-12-13T09:26:10.277034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.299363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.312700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.353425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.370709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.384760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.404487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.421850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.427797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.439681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.448503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.464151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.479610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.487713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.502943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.509816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.521826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.535121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.543360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.550700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.558077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.574244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.581425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.589516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:26:10.668401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44928","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:26:26 up 0 min,  0 users,  load average: 0.65, 0.17, 0.06
	Linux test-preload-923878 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2f03176a883ec125bc0b2a68c912a71e481a85bf2b41dbd786959627cd243daa] <==
	I1213 09:26:11.287613       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:26:11.289306       1 aggregator.go:171] initial CRD sync complete...
	I1213 09:26:11.289335       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:26:11.289341       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:26:11.289346       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:26:11.290206       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:26:11.303061       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 09:26:11.303198       1 policy_source.go:240] refreshing policies
	I1213 09:26:11.303327       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 09:26:11.303348       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 09:26:11.303673       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 09:26:11.315271       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:26:11.319033       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:26:11.319126       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:26:11.332085       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:26:11.338732       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 09:26:11.535916       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:26:12.181353       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:26:12.763491       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:26:12.810121       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:26:12.839462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:26:12.845951       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:26:14.719849       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:26:14.918867       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:26:15.168353       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b93e27861e3fd4b48167cfdd1792abc9c1471a69c2b02de3be40426cd7824f8e] <==
	I1213 09:26:14.667134       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 09:26:14.667152       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 09:26:14.667268       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:26:14.668219       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 09:26:14.668502       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 09:26:14.668653       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 09:26:14.672707       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 09:26:14.672882       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:26:14.674998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:26:14.676371       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 09:26:14.679624       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:26:14.682979       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 09:26:14.685098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 09:26:14.690519       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:26:14.698919       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:26:14.698977       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:26:14.698976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:26:14.699047       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:26:14.699054       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:26:14.699059       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:26:14.700142       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:26:14.700182       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:26:14.700190       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:26:14.711433       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:26:24.661608       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
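The final node-lifecycle-controller line above is the controller observing the node report Ready again after the restart; a direct cross-check from the same kubectl context used elsewhere in this run would be:

  kubectl --context test-preload-923878 get nodes -o wide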
	
	
	==> kube-proxy [88230e71e5a5baf70de3cbcd710f758da8608b08ab53a3101b74dfa068ec98d8] <==
	I1213 09:26:12.157663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:26:12.260045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:26:12.260084       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.20"]
	E1213 09:26:12.260172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:26:12.360478       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:26:12.360628       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:26:12.360738       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:26:12.381476       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:26:12.383371       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:26:12.383485       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:26:12.392390       1 config.go:200] "Starting service config controller"
	I1213 09:26:12.392608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:26:12.392639       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:26:12.392643       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:26:12.392657       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:26:12.392660       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:26:12.393702       1 config.go:309] "Starting node config controller"
	I1213 09:26:12.394317       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:26:12.493334       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:26:12.493383       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:26:12.493691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:26:12.494869       1 shared_informer.go:356] "Caches are synced" controller="node config"
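kube-proxy here drops to IPv4-only iptables mode because the guest kernel lacks ip6tables nat support, and it warns that nodePortAddresses is unset. In a kubeadm-provisioned cluster such as this one, kube-proxy normally reads its configuration from a ConfigMap, so the unset field can be inspected (or set) there; the ConfigMap name below is the kubeadm default and is an assumption, not something shown in this log:

  kubectl --context test-preload-923878 -n kube-system get configmap kube-proxy -o yaml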
	
	
	==> kube-scheduler [19c07767fbb9a47a63c4b13d0df94048bc4f18812d06f6b726efa34ad44b4f6a] <==
	I1213 09:26:10.344003       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:26:11.222097       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:26:11.223607       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:26:11.223670       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:26:11.223690       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:26:11.309669       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 09:26:11.309764       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:26:11.321733       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:26:11.322258       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:26:11.325636       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:26:11.325729       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:26:11.423167       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.399647    1193 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-923878"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: E1213 09:26:11.409909    1193 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-923878\" already exists" pod="kube-system/etcd-test-preload-923878"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.477184    1193 apiserver.go:52] "Watching apiserver"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: E1213 09:26:11.483874    1193 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-s9hrv" podUID="ed441581-eded-48c6-ad07-2dea59d9b038"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.520197    1193 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.523578    1193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1-tmp\") pod \"storage-provisioner\" (UID: \"bd2c0930-c1b8-48aa-ae2e-0cc3acb529a1\") " pod="kube-system/storage-provisioner"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.523655    1193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b409d074-26cf-41d8-8711-26673b2a0e9d-xtables-lock\") pod \"kube-proxy-s76lg\" (UID: \"b409d074-26cf-41d8-8711-26673b2a0e9d\") " pod="kube-system/kube-proxy-s76lg"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.523675    1193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b409d074-26cf-41d8-8711-26673b2a0e9d-lib-modules\") pod \"kube-proxy-s76lg\" (UID: \"b409d074-26cf-41d8-8711-26673b2a0e9d\") " pod="kube-system/kube-proxy-s76lg"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: E1213 09:26:11.524058    1193 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: E1213 09:26:11.524168    1193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume podName:ed441581-eded-48c6-ad07-2dea59d9b038 nodeName:}" failed. No retries permitted until 2025-12-13 09:26:12.024148529 +0000 UTC m=+4.624712827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume") pod "coredns-66bc5c9577-s9hrv" (UID: "ed441581-eded-48c6-ad07-2dea59d9b038") : object "kube-system"/"coredns" not registered
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.636226    1193 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-923878"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: I1213 09:26:11.636687    1193 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-923878"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: E1213 09:26:11.663695    1193 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-923878\" already exists" pod="kube-system/kube-scheduler-test-preload-923878"
	Dec 13 09:26:11 test-preload-923878 kubelet[1193]: E1213 09:26:11.665826    1193 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-923878\" already exists" pod="kube-system/kube-apiserver-test-preload-923878"
	Dec 13 09:26:12 test-preload-923878 kubelet[1193]: E1213 09:26:12.028798    1193 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 09:26:12 test-preload-923878 kubelet[1193]: E1213 09:26:12.028871    1193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume podName:ed441581-eded-48c6-ad07-2dea59d9b038 nodeName:}" failed. No retries permitted until 2025-12-13 09:26:13.028856288 +0000 UTC m=+5.629420584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume") pod "coredns-66bc5c9577-s9hrv" (UID: "ed441581-eded-48c6-ad07-2dea59d9b038") : object "kube-system"/"coredns" not registered
	Dec 13 09:26:12 test-preload-923878 kubelet[1193]: E1213 09:26:12.564260    1193 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 13 09:26:13 test-preload-923878 kubelet[1193]: E1213 09:26:13.036336    1193 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 09:26:13 test-preload-923878 kubelet[1193]: E1213 09:26:13.036402    1193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume podName:ed441581-eded-48c6-ad07-2dea59d9b038 nodeName:}" failed. No retries permitted until 2025-12-13 09:26:15.036388831 +0000 UTC m=+7.636953113 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume") pod "coredns-66bc5c9577-s9hrv" (UID: "ed441581-eded-48c6-ad07-2dea59d9b038") : object "kube-system"/"coredns" not registered
	Dec 13 09:26:13 test-preload-923878 kubelet[1193]: E1213 09:26:13.569923    1193 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-s9hrv" podUID="ed441581-eded-48c6-ad07-2dea59d9b038"
	Dec 13 09:26:15 test-preload-923878 kubelet[1193]: E1213 09:26:15.053626    1193 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 09:26:15 test-preload-923878 kubelet[1193]: E1213 09:26:15.053742    1193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume podName:ed441581-eded-48c6-ad07-2dea59d9b038 nodeName:}" failed. No retries permitted until 2025-12-13 09:26:19.053724047 +0000 UTC m=+11.654288330 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed441581-eded-48c6-ad07-2dea59d9b038-config-volume") pod "coredns-66bc5c9577-s9hrv" (UID: "ed441581-eded-48c6-ad07-2dea59d9b038") : object "kube-system"/"coredns" not registered
	Dec 13 09:26:15 test-preload-923878 kubelet[1193]: E1213 09:26:15.568766    1193 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-s9hrv" podUID="ed441581-eded-48c6-ad07-2dea59d9b038"
	Dec 13 09:26:17 test-preload-923878 kubelet[1193]: E1213 09:26:17.570221    1193 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617977567659786 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 13 09:26:17 test-preload-923878 kubelet[1193]: E1213 09:26:17.570243    1193 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617977567659786 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
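The kubelet errors above are two separate, usually transient, post-restart conditions: the coredns config-volume mounts fail until the "coredns" ConfigMap lands in the kubelet's informer cache (note the 500ms/1s/2s/4s retry backoff), and the eviction manager cannot yet read image-filesystem stats from CRI-O. The latter can be checked directly with crictl, reusing the ssh form this report already uses for other profiles:

  out/minikube-linux-amd64 -p test-preload-923878 ssh "sudo crictl imagefsinfo"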
	
	
	==> storage-provisioner [763c2715f29301ff20d1bbbdc7e52b1724db2c64731372d9340a4302cdaafea6] <==
	I1213 09:26:12.029612       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-923878 -n test-preload-923878
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-923878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-923878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-923878
--- FAIL: TestPreload (146.65s)
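To iterate on this failure outside CI, the single test can be rerun by name; the package path and timeout below assume the standard minikube repository layout and are not taken from this log:

  go test -v -timeout 90m -run 'TestPreload$' ./test/integration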

                                                
                                    

Test pass (382/437)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 9.64
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.17
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 9.32
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.65
31 TestOffline 106.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 133.82
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 11.55
44 TestAddons/parallel/Registry 18.67
45 TestAddons/parallel/RegistryCreds 0.68
47 TestAddons/parallel/InspektorGadget 11.81
48 TestAddons/parallel/MetricsServer 6.81
50 TestAddons/parallel/CSI 52.48
51 TestAddons/parallel/Headlamp 24.12
52 TestAddons/parallel/CloudSpanner 5.54
53 TestAddons/parallel/LocalPath 58.82
54 TestAddons/parallel/NvidiaDevicePlugin 6.86
55 TestAddons/parallel/Yakd 11.98
57 TestAddons/StoppedEnableDisable 82.95
58 TestCertOptions 51.11
59 TestCertExpiration 376.6
61 TestForceSystemdFlag 87.5
62 TestForceSystemdEnv 40.66
67 TestErrorSpam/setup 36.4
68 TestErrorSpam/start 0.34
69 TestErrorSpam/status 0.66
70 TestErrorSpam/pause 1.52
71 TestErrorSpam/unpause 1.72
72 TestErrorSpam/stop 93.78
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 84.46
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 36.03
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.13
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
84 TestFunctional/serial/CacheCmd/cache/add_local 2.15
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 33.51
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.32
95 TestFunctional/serial/LogsFileCmd 1.31
96 TestFunctional/serial/InvalidService 4.29
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 18.89
100 TestFunctional/parallel/DryRun 0.25
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.74
106 TestFunctional/parallel/ServiceCmdConnect 12.51
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 43.39
110 TestFunctional/parallel/SSHCmd 0.34
111 TestFunctional/parallel/CpCmd 1.25
112 TestFunctional/parallel/MySQL 32.2
113 TestFunctional/parallel/FileSync 0.2
114 TestFunctional/parallel/CertSync 1.18
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
122 TestFunctional/parallel/License 0.32
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.25
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ProfileCmd/profile_list 0.47
126 TestFunctional/parallel/MountCmd/any-port 9.38
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
128 TestFunctional/parallel/Version/short 0.08
129 TestFunctional/parallel/Version/components 0.7
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
133 TestFunctional/parallel/ImageCommands/ImageListYaml 2.2
134 TestFunctional/parallel/ImageCommands/ImageBuild 10.67
135 TestFunctional/parallel/ImageCommands/Setup 1.75
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
140 TestFunctional/parallel/ImageCommands/ImageRemove 1.02
141 TestFunctional/parallel/ServiceCmd/List 0.45
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
143 TestFunctional/parallel/MountCmd/specific-port 1.51
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
146 TestFunctional/parallel/ServiceCmd/Format 0.26
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
148 TestFunctional/parallel/ServiceCmd/URL 0.27
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.13
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 83.38
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 47.56
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.09
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.4
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.07
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.55
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 45.76
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.31
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.29
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.39
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 10.93
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.26
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.04
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 9.47
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 30.2
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.34
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.27
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 39.33
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.23
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.05
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.09
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.44
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.35
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.69
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 10.2
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.35
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.3
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 9.12
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.4
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.72
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.3
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.33
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.32
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.34
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.21
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.28
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.32
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.31
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.86
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.29
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.18
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.5
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.09
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.97
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.73
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.64
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.01
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 2.53
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 186.1
262 TestMultiControlPlane/serial/DeployApp 6.73
263 TestMultiControlPlane/serial/PingHostFromPods 1.35
264 TestMultiControlPlane/serial/AddWorkerNode 45.39
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
267 TestMultiControlPlane/serial/CopyFile 10.89
268 TestMultiControlPlane/serial/StopSecondaryNode 83.97
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
270 TestMultiControlPlane/serial/RestartSecondaryNode 37.42
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.73
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 360.03
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.53
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
275 TestMultiControlPlane/serial/StopCluster 250.97
276 TestMultiControlPlane/serial/RestartCluster 87.38
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
278 TestMultiControlPlane/serial/AddSecondaryNode 73.68
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
284 TestJSONOutput/start/Command 74.89
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.7
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.63
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.82
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 77.47
316 TestMountStart/serial/StartWithMountFirst 19.28
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 21.94
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.71
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.29
323 TestMountStart/serial/RestartStopped 18.58
324 TestMountStart/serial/VerifyMountPostStop 0.3
327 TestMultiNode/serial/FreshStart2Nodes 103.77
328 TestMultiNode/serial/DeployApp2Nodes 5.78
329 TestMultiNode/serial/PingHostFrom2Pods 0.83
330 TestMultiNode/serial/AddNode 41.68
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.46
333 TestMultiNode/serial/CopyFile 5.96
334 TestMultiNode/serial/StopNode 2.2
335 TestMultiNode/serial/StartAfterStop 40.92
336 TestMultiNode/serial/RestartKeepsNodes 322.9
337 TestMultiNode/serial/DeleteNode 2.49
338 TestMultiNode/serial/StopMultiNode 161.97
339 TestMultiNode/serial/RestartMultiNode 113.14
340 TestMultiNode/serial/ValidateNameConflict 38.12
347 TestScheduledStopUnix 107.5
351 TestRunningBinaryUpgrade 459.06
353 TestKubernetesUpgrade 155.78
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
360 TestNoKubernetes/serial/StartWithK8s 77.6
365 TestNetworkPlugins/group/false 3.54
369 TestNoKubernetes/serial/StartWithStopK8s 32.37
370 TestNoKubernetes/serial/Start 38.62
371 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
372 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
373 TestNoKubernetes/serial/ProfileList 0.83
374 TestNoKubernetes/serial/Stop 1.35
375 TestNoKubernetes/serial/StartNoArgs 55.26
376 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.16
377 TestStoppedBinaryUpgrade/Setup 3.24
378 TestStoppedBinaryUpgrade/Upgrade 70.98
379 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
387 TestISOImage/Setup 23.49
389 TestPause/serial/Start 94.14
391 TestISOImage/Binaries/crictl 0.17
392 TestISOImage/Binaries/curl 0.17
393 TestISOImage/Binaries/docker 0.17
394 TestISOImage/Binaries/git 0.16
395 TestISOImage/Binaries/iptables 0.17
396 TestISOImage/Binaries/podman 0.16
397 TestISOImage/Binaries/rsync 0.17
398 TestISOImage/Binaries/socat 0.16
399 TestISOImage/Binaries/wget 0.16
400 TestISOImage/Binaries/VBoxControl 0.17
401 TestISOImage/Binaries/VBoxService 0.17
402 TestNetworkPlugins/group/auto/Start 102.52
403 TestPause/serial/SecondStartNoReconfiguration 35.27
404 TestNetworkPlugins/group/auto/KubeletFlags 0.21
405 TestNetworkPlugins/group/auto/NetCatPod 11.29
406 TestNetworkPlugins/group/kindnet/Start 59.73
407 TestNetworkPlugins/group/auto/DNS 0.17
408 TestNetworkPlugins/group/auto/Localhost 0.14
409 TestNetworkPlugins/group/auto/HairPin 0.13
410 TestPause/serial/Pause 0.86
411 TestPause/serial/VerifyStatus 0.22
412 TestPause/serial/Unpause 0.69
413 TestNetworkPlugins/group/calico/Start 77.16
414 TestPause/serial/PauseAgain 0.86
415 TestPause/serial/DeletePaused 0.88
416 TestPause/serial/VerifyDeletedResources 1.56
417 TestNetworkPlugins/group/custom-flannel/Start 92.6
418 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
419 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
420 TestNetworkPlugins/group/kindnet/NetCatPod 13.32
421 TestNetworkPlugins/group/kindnet/DNS 0.19
422 TestNetworkPlugins/group/kindnet/Localhost 0.16
423 TestNetworkPlugins/group/kindnet/HairPin 0.16
424 TestNetworkPlugins/group/enable-default-cni/Start 85.96
425 TestNetworkPlugins/group/calico/ControllerPod 6.01
426 TestNetworkPlugins/group/calico/KubeletFlags 0.21
427 TestNetworkPlugins/group/calico/NetCatPod 11.81
428 TestNetworkPlugins/group/calico/DNS 0.25
429 TestNetworkPlugins/group/calico/Localhost 0.13
430 TestNetworkPlugins/group/calico/HairPin 0.15
431 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
432 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
433 TestNetworkPlugins/group/custom-flannel/DNS 0.18
434 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
435 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
436 TestNetworkPlugins/group/flannel/Start 72.65
437 TestNetworkPlugins/group/bridge/Start 65.11
439 TestStartStop/group/old-k8s-version/serial/FirstStart 79.66
440 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
441 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
442 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
443 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
444 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
445 TestNetworkPlugins/group/flannel/ControllerPod 6.01
446 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
447 TestNetworkPlugins/group/bridge/NetCatPod 11.3
449 TestStartStop/group/no-preload/serial/FirstStart 95.75
450 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
451 TestNetworkPlugins/group/flannel/NetCatPod 11.32
452 TestNetworkPlugins/group/bridge/DNS 0.18
453 TestNetworkPlugins/group/bridge/Localhost 0.15
454 TestNetworkPlugins/group/bridge/HairPin 0.13
455 TestNetworkPlugins/group/flannel/DNS 0.19
456 TestNetworkPlugins/group/flannel/Localhost 0.14
457 TestNetworkPlugins/group/flannel/HairPin 0.15
458 TestStartStop/group/old-k8s-version/serial/DeployApp 11.52
460 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.79
462 TestStartStop/group/newest-cni/serial/FirstStart 58.06
463 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
464 TestStartStop/group/old-k8s-version/serial/Stop 76.4
465 TestStartStop/group/newest-cni/serial/DeployApp 0
466 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
467 TestStartStop/group/newest-cni/serial/Stop 7.09
468 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
469 TestStartStop/group/newest-cni/serial/SecondStart 33.03
470 TestStartStop/group/no-preload/serial/DeployApp 10.31
471 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
472 TestStartStop/group/no-preload/serial/Stop 88.89
473 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
474 TestStartStop/group/old-k8s-version/serial/SecondStart 46.03
475 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.34
476 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
477 TestStartStop/group/default-k8s-diff-port/serial/Stop 86.76
478 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
479 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
480 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
481 TestStartStop/group/newest-cni/serial/Pause 2.78
483 TestStartStop/group/embed-certs/serial/FirstStart 82.98
484 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 19.01
485 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
486 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
487 TestStartStop/group/old-k8s-version/serial/Pause 2.53
489 TestISOImage/PersistentMounts//data 0.17
490 TestISOImage/PersistentMounts//var/lib/docker 0.16
491 TestISOImage/PersistentMounts//var/lib/cni 0.16
492 TestISOImage/PersistentMounts//var/lib/kubelet 0.16
493 TestISOImage/PersistentMounts//var/lib/minikube 0.16
494 TestISOImage/PersistentMounts//var/lib/toolbox 0.16
495 TestISOImage/PersistentMounts//var/lib/boot2docker 0.16
496 TestISOImage/VersionJSON 0.16
497 TestISOImage/eBPFSupport 0.16
498 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
499 TestStartStop/group/no-preload/serial/SecondStart 54.59
500 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
501 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.94
502 TestStartStop/group/embed-certs/serial/DeployApp 11.37
503 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
504 TestStartStop/group/embed-certs/serial/Stop 85.63
505 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
506 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7
507 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
508 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
509 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.19
510 TestStartStop/group/no-preload/serial/Pause 2.46
511 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
512 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.47
513 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
514 TestStartStop/group/embed-certs/serial/SecondStart 41.76
515 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
516 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
517 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
518 TestStartStop/group/embed-certs/serial/Pause 2.48
TestDownloadOnly/v1.28.0/json-events (22.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-588045 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-588045 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.422623255s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.42s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 08:29:22.036674    9697 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 08:29:22.036767    9697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
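This check only asserts that the tarball fetched by the preceding json-events test is already on disk; the equivalent manual check against the cache path printed above would be:

  ls -lh /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/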

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-588045
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-588045: exit status 85 (75.753573ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-588045 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-588045 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:28:59.670701    9709 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:59.670913    9709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:59.670921    9709 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:59.670925    9709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:59.671098    9709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	W1213 08:28:59.671211    9709 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22128-5761/.minikube/config/config.json: open /home/jenkins/minikube-integration/22128-5761/.minikube/config/config.json: no such file or directory
	I1213 08:28:59.671703    9709 out.go:368] Setting JSON to true
	I1213 08:28:59.672589    9709 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":684,"bootTime":1765613856,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:59.672655    9709 start.go:143] virtualization: kvm guest
	I1213 08:28:59.678258    9709 out.go:99] [download-only-588045] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:28:59.678468    9709 notify.go:221] Checking for updates...
	W1213 08:28:59.678546    9709 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 08:28:59.679677    9709 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:28:59.680978    9709 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:59.682232    9709 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:28:59.683447    9709 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:28:59.684836    9709 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 08:28:59.687447    9709 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:28:59.687744    9709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:00.378093    9709 out.go:99] Using the kvm2 driver based on user configuration
	I1213 08:29:00.378125    9709 start.go:309] selected driver: kvm2
	I1213 08:29:00.378132    9709 start.go:927] validating driver "kvm2" against <nil>
	I1213 08:29:00.378456    9709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:00.378946    9709 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 08:29:00.379105    9709 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:00.379125    9709 cni.go:84] Creating CNI manager for ""
	I1213 08:29:00.379171    9709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 08:29:00.379181    9709 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:00.379212    9709 start.go:353] cluster config:
	{Name:download-only-588045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-588045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:00.379410    9709 iso.go:125] acquiring lock: {Name:mk6cfae0203e3172b0791a477e21fba41da25205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:29:00.381078    9709 out.go:99] Downloading VM boot image ...
	I1213 08:29:00.381126    9709 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22128-5761/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1213 08:29:10.395123    9709 out.go:99] Starting "download-only-588045" primary control-plane node in "download-only-588045" cluster
	I1213 08:29:10.395168    9709 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 08:29:10.482800    9709 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1213 08:29:10.482829    9709 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:10.482997    9709 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 08:29:10.485124    9709 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 08:29:10.485160    9709 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1213 08:29:10.585488    9709 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1213 08:29:10.585643    9709 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-588045 host does not exist
	  To start a cluster, run: "minikube start -p download-only-588045"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-588045
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (9.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-463843 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-463843 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.64244025s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.64s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 08:29:32.068313    9697 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 08:29:32.068359    9697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-463843
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-463843: exit status 85 (79.389823ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-588045 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-588045 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-588045                                                                                                                                                 │ download-only-588045 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-463843 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-463843 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:22.478100    9963 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:29:22.478243    9963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:22.478252    9963 out.go:374] Setting ErrFile to fd 2...
	I1213 08:29:22.478258    9963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:22.478452    9963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:29:22.478938    9963 out.go:368] Setting JSON to true
	I1213 08:29:22.479775    9963 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":706,"bootTime":1765613856,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:29:22.479842    9963 start.go:143] virtualization: kvm guest
	I1213 08:29:22.482116    9963 out.go:99] [download-only-463843] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:29:22.482362    9963 notify.go:221] Checking for updates...
	I1213 08:29:22.483940    9963 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:22.485690    9963 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:29:22.487208    9963 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:29:22.488733    9963 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:29:22.490373    9963 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 08:29:22.492909    9963 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:22.493177    9963 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:22.526764    9963 out.go:99] Using the kvm2 driver based on user configuration
	I1213 08:29:22.526812    9963 start.go:309] selected driver: kvm2
	I1213 08:29:22.526818    9963 start.go:927] validating driver "kvm2" against <nil>
	I1213 08:29:22.527187    9963 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:22.527685    9963 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 08:29:22.527854    9963 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:22.527878    9963 cni.go:84] Creating CNI manager for ""
	I1213 08:29:22.527940    9963 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 08:29:22.527953    9963 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:22.528004    9963 start.go:353] cluster config:
	{Name:download-only-463843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-463843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:22.528116    9963 iso.go:125] acquiring lock: {Name:mk6cfae0203e3172b0791a477e21fba41da25205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:29:22.529831    9963 out.go:99] Starting "download-only-463843" primary control-plane node in "download-only-463843" cluster
	I1213 08:29:22.529857    9963 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:29:22.970869    9963 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 08:29:22.970922    9963 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:22.971087    9963 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:29:22.972933    9963 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1213 08:29:22.972954    9963 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1213 08:29:23.069280    9963 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1213 08:29:23.069349    9963 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-463843 host does not exist
	  To start a cluster, run: "minikube start -p download-only-463843"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-463843
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (9.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-433374 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-433374 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.316970981s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (9.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 08:29:41.783702    9697 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 08:29:41.783734    9697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-433374
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-433374: exit status 85 (74.236547ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-588045 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-588045 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-588045                                                                                                                                                        │ download-only-588045 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-463843 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-463843 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-463843                                                                                                                                                        │ download-only-463843 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-433374 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-433374 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:32.519343   10154 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:29:32.519600   10154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:32.519611   10154 out.go:374] Setting ErrFile to fd 2...
	I1213 08:29:32.519616   10154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:32.519809   10154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:29:32.520257   10154 out.go:368] Setting JSON to true
	I1213 08:29:32.521053   10154 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":716,"bootTime":1765613856,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:29:32.521113   10154 start.go:143] virtualization: kvm guest
	I1213 08:29:32.523253   10154 out.go:99] [download-only-433374] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:29:32.523426   10154 notify.go:221] Checking for updates...
	I1213 08:29:32.525155   10154 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:32.526851   10154 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:29:32.528396   10154 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:29:32.533188   10154 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:29:32.534884   10154 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 08:29:32.537834   10154 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:32.538188   10154 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:32.570843   10154 out.go:99] Using the kvm2 driver based on user configuration
	I1213 08:29:32.570882   10154 start.go:309] selected driver: kvm2
	I1213 08:29:32.570889   10154 start.go:927] validating driver "kvm2" against <nil>
	I1213 08:29:32.571229   10154 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:32.571775   10154 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 08:29:32.571925   10154 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:32.571947   10154 cni.go:84] Creating CNI manager for ""
	I1213 08:29:32.571997   10154 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 08:29:32.572007   10154 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:32.572042   10154 start.go:353] cluster config:
	{Name:download-only-433374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-433374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:32.572126   10154 iso.go:125] acquiring lock: {Name:mk6cfae0203e3172b0791a477e21fba41da25205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:29:32.573799   10154 out.go:99] Starting "download-only-433374" primary control-plane node in "download-only-433374" cluster
	I1213 08:29:32.573821   10154 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 08:29:32.720041   10154 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 08:29:32.720070   10154 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:32.720238   10154 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 08:29:32.722246   10154 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1213 08:29:32.722274   10154 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1213 08:29:32.816052   10154 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1213 08:29:32.816094   10154 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-433374 host does not exist
	  To start a cluster, run: "minikube start -p download-only-433374"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-433374
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 08:29:42.603325    9697 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-067349 --alsologtostderr --binary-mirror http://127.0.0.1:40107 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-067349" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-067349
--- PASS: TestBinaryMirror (0.65s)
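
TestBinaryMirror redirects the kubectl/kubelet/kubeadm downloads to a local HTTP endpoint via --binary-mirror. A minimal sketch of exercising the flag by hand, assuming ./mirror is laid out like dl.k8s.io/release/... and already contains the binaries and their checksums; the port and profile name are arbitrary:

# Serve the mirror directory, then have minikube fetch its Kubernetes binaries from it.
python3 -m http.server 40107 --directory ./mirror &
MIRROR_PID=$!
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:40107 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 delete -p binary-mirror-demo
kill $MIRROR_PID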

                                                
                                    
x
+
TestOffline (106.35s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-193786 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-193786 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m44.774618058s)
helpers_test.go:176: Cleaning up "offline-crio-193786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-193786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-193786: (1.576494812s)
--- PASS: TestOffline (106.35s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-917695
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-917695: exit status 85 (65.450013ms)

                                                
                                                
-- stdout --
	* Profile "addons-917695" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-917695"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-917695
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-917695: exit status 85 (63.889933ms)

                                                
                                                
-- stdout --
	* Profile "addons-917695" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-917695"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (133.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-917695 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-917695 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.823426877s)
--- PASS: TestAddons/Setup (133.82s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-917695 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-917695 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-917695 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-917695 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7c1bba69-7ed7-4165-8c95-96b84fd3c6d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7c1bba69-7ed7-4165-8c95-96b84fd3c6d0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004753097s
addons_test.go:696: (dbg) Run:  kubectl --context addons-917695 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-917695 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-917695 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.55s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.333351ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-jk6nh" [5b9cee4c-b367-49f4-bc49-497edd267414] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005456095s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-6svfh" [64d7a435-6506-4bba-a294-e2111eee1c24] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006610212s
addons_test.go:394: (dbg) Run:  kubectl --context addons-917695 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-917695 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-917695 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.541199561s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 ip
2025/12/13 08:32:35 [DEBUG] GET http://192.168.39.154:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.67s)
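
The registry check above reduces to two probes: an in-cluster wget against the registry Service, and a GET against the registry proxy on the VM IP. A minimal sketch replaying both, assuming the addons-917695 profile is still running and the proxy listens on port 5000 as in the debug GET above:

# In-cluster probe of the registry Service, mirroring the test's wget check.
kubectl --context addons-917695 run registry-probe --rm -it --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# Node-level probe through the registry proxy on the VM IP.
curl -sS "http://$(out/minikube-linux-amd64 -p addons-917695 ip):5000/" && echo "registry proxy reachable"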

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.68s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.88536ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-917695
addons_test.go:334: (dbg) Run:  kubectl --context addons-917695 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.68s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-ln9w7" [35c62f75-1eca-4dd0-a16d-0a966d21e6fe] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006433194s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable inspektor-gadget --alsologtostderr -v=1: (5.807240343s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.81s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.946487ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-txm49" [b0c671da-5ff1-4882-b011-4feddd170742] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003943693s
addons_test.go:465: (dbg) Run:  kubectl --context addons-917695 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.81s)
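
The metrics-server check is simply that kubectl top returns data once the addon's Deployment is healthy. A minimal sketch, assuming the Deployment is named metrics-server (inferred from the pod name in the log) and that at least one metrics scrape has completed:

out/minikube-linux-amd64 -p addons-917695 addons enable metrics-server
kubectl --context addons-917695 -n kube-system rollout status deployment/metrics-server
kubectl --context addons-917695 top pods -n kube-system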

                                                
                                    
x
+
TestAddons/parallel/CSI (52.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 08:32:24.909098    9697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 08:32:24.931387    9697 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 08:32:24.931415    9697 kapi.go:107] duration metric: took 22.337515ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 22.347907ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-917695 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-917695 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [facf8525-d931-4a6e-92ca-5f9754a9f5a0] Pending
helpers_test.go:353: "task-pv-pod" [facf8525-d931-4a6e-92ca-5f9754a9f5a0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [facf8525-d931-4a6e-92ca-5f9754a9f5a0] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004034175s
addons_test.go:574: (dbg) Run:  kubectl --context addons-917695 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-917695 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-917695 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-917695 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-917695 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-917695 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-917695 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [66211258-bfc5-4fc4-aa3d-2b5bcec62c0c] Pending
helpers_test.go:353: "task-pv-pod-restore" [66211258-bfc5-4fc4-aa3d-2b5bcec62c0c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [66211258-bfc5-4fc4-aa3d-2b5bcec62c0c] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004085242s
addons_test.go:616: (dbg) Run:  kubectl --context addons-917695 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-917695 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-917695 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.067246291s)
--- PASS: TestAddons/parallel/CSI (52.48s)
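
The CSI flow above is: create a PVC, run a pod against it, snapshot it, delete both, then restore from the snapshot into a new PVC. The testdata manifests are not reproduced in this log, so the condensed sketch below (pod steps omitted) is illustrative; the StorageClass name csi-hostpath-sc and VolumeSnapshotClass name csi-hostpath-snapclass are assumptions about what the csi-hostpath-driver and volumesnapshots addons install:

# PVC backed by the hostpath CSI driver, then a snapshot of it (object names mirror the test).
kubectl --context addons-917695 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: hpvc}
spec:
  accessModes: [ReadWriteOnce]
  resources: {requests: {storage: 1Gi}}
  storageClassName: csi-hostpath-sc
EOF
kubectl --context addons-917695 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata: {name: new-snapshot-demo}
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source: {persistentVolumeClaimName: hpvc}
EOF
# readyToUse flips to true once the snapshot has been cut; restore by creating a new PVC
# whose dataSource references new-snapshot-demo.
kubectl --context addons-917695 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'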

                                                
                                    
x
+
TestAddons/parallel/Headlamp (24.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-917695 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-917695 --alsologtostderr -v=1: (1.144682808s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-qbm86" [80ba3acc-6f21-4602-ba14-0aea64cb2780] Pending
helpers_test.go:353: "headlamp-dfcdc64b-qbm86" [80ba3acc-6f21-4602-ba14-0aea64cb2780] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-qbm86" [80ba3acc-6f21-4602-ba14-0aea64cb2780] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.005203511s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable headlamp --alsologtostderr -v=1: (5.967998185s)
--- PASS: TestAddons/parallel/Headlamp (24.12s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-znrdz" [3b49142e-d50e-481d-b139-db75fb427a23] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004645529s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (58.82s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-917695 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-917695 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [8779e3a0-b1d6-4461-a8a9-24d5f56d30b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [8779e3a0-b1d6-4461-a8a9-24d5f56d30b1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [8779e3a0-b1d6-4461-a8a9-24d5f56d30b1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.004490083s
addons_test.go:969: (dbg) Run:  kubectl --context addons-917695 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 ssh "cat /opt/local-path-provisioner/pvc-e8937d4d-4320-4d8c-b491-c79dee89d1bb_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-917695 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-917695 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.977670099s)
--- PASS: TestAddons/parallel/LocalPath (58.82s)
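
The local-path test writes a file through a dynamically provisioned PVC and then reads it back from the node at /opt/local-path-provisioner/<pv>_<namespace>_<pvc>/. A minimal sketch of the same round trip, assuming the provisioner's StorageClass is named local-path (the testdata manifests are not included in this log):

kubectl --context addons-917695 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: test-pvc}
spec:
  accessModes: [ReadWriteOnce]
  resources: {requests: {storage: 64Mi}}
  storageClassName: local-path
---
apiVersion: v1
kind: Pod
metadata: {name: test-local-path}
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-test > /data/file1"]
    volumeMounts: [{name: data, mountPath: /data}]
  volumes:
  - name: data
    persistentVolumeClaim: {claimName: test-pvc}
EOF
# Once the pod completes, read the file back from the node's backing directory
# (path pattern taken from the ssh "cat" step above).
out/minikube-linux-amd64 -p addons-917695 ssh \
  "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"

The long run of Pending polls above is expected if the StorageClass uses WaitForFirstConsumer binding, since the PVC only binds once the consuming pod is scheduled.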

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.86s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-fc667" [3cb5ce62-9820-4ff4-a96c-d1dd68c20667] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004456798s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.86s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-k9pb4" [2ace52b4-d857-49a5-a232-ac83801d0b31] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004275578s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable yakd --alsologtostderr -v=1: (5.979206169s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (82.95s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-917695
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-917695: (1m22.753638971s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-917695
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-917695
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-917695
--- PASS: TestAddons/StoppedEnableDisable (82.95s)

                                                
                                    
x
+
TestCertOptions (51.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-729270 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-729270 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (49.119524358s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-729270 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-729270 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-729270 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-729270" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-729270
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-729270: (1.543833994s)
--- PASS: TestCertOptions (51.11s)
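
TestCertOptions starts a cluster with extra apiserver IPs and names and a non-default apiserver port, then verifies they ended up in the serving certificate and kubeconfig. A minimal sketch of the same inspection, assuming the cert-options-729270 profile from this run (substitute any running profile otherwise):

# Check that the extra SANs (192.168.15.15, www.google.com, ...) made it into the apiserver cert.
out/minikube-linux-amd64 -p cert-options-729270 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# The kubeconfig server URL should use the custom apiserver port (8555 in this run).
kubectl --context cert-options-729270 config view \
  -o jsonpath='{.clusters[?(@.name=="cert-options-729270")].cluster.server}'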

                                                
                                    
x
+
TestCertExpiration (376.6s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-583392 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-583392 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m19.626532126s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-583392 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-583392 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m56.113107577s)
helpers_test.go:176: Cleaning up "cert-expiration-583392" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-583392
--- PASS: TestCertExpiration (376.60s)
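
TestCertExpiration first provisions with a 3-minute certificate lifetime, lets it lapse, then restarts with --cert-expiration=8760h to confirm the certificates are regenerated. A minimal sketch of checking the resulting expiry window on a profile started with a custom --cert-expiration; the profile name here is arbitrary:

out/minikube-linux-amd64 start -p cert-expiry-demo --memory=3072 --cert-expiration=8760h \
  --driver=kvm2 --container-runtime=crio
# Print the notAfter date of the freshly minted apiserver certificate.
out/minikube-linux-amd64 -p cert-expiry-demo ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
out/minikube-linux-amd64 delete -p cert-expiry-demo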

                                                
                                    
x
+
TestForceSystemdFlag (87.5s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-472089 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-472089 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m26.434908242s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-472089 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-472089" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-472089
--- PASS: TestForceSystemdFlag (87.50s)
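
The --force-systemd run is verified by reading CRI-O's drop-in configuration inside the VM. The log only shows the file being catted, so treating cgroup_manager as the key the flag flips to "systemd" is an assumption about the CRI-O config here. A minimal sketch, with an arbitrary profile name:

out/minikube-linux-amd64 start -p force-systemd-demo --memory=3072 --force-systemd \
  --driver=kvm2 --container-runtime=crio
# CRI-O should be configured with the systemd cgroup manager when --force-systemd is set.
out/minikube-linux-amd64 -p force-systemd-demo ssh \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
out/minikube-linux-amd64 delete -p force-systemd-demo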

                                                
                                    
x
+
TestForceSystemdEnv (40.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-263328 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-263328 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (38.90900224s)
helpers_test.go:176: Cleaning up "force-systemd-env-263328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-263328
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-263328: (1.752955288s)
--- PASS: TestForceSystemdEnv (40.66s)

                                                
                                    
TestErrorSpam/setup (36.4s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-778395 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-778395 --driver=kvm2  --container-runtime=crio
E1213 08:36:58.342361    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:58.348799    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:58.360321    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:58.381782    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:58.423215    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:58.504698    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:58.666184    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:58.987873    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:59.629452    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:00.911114    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:03.474102    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:08.595883    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-778395 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-778395 --driver=kvm2  --container-runtime=crio: (36.399933178s)
--- PASS: TestErrorSpam/setup (36.40s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 status
--- PASS: TestErrorSpam/status (0.66s)

                                                
                                    
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 pause
E1213 08:37:18.837820    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (93.78s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 stop
E1213 08:37:39.319608    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:38:20.282371    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 stop: (1m30.131524471s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 stop: (1.969881686s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-778395 --log_dir /tmp/nospam-778395 stop: (1.676134658s)
--- PASS: TestErrorSpam/stop (93.78s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-5761/.minikube/files/etc/test/nested/copy/9697/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (84.46s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014502 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1213 08:39:42.207226    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-014502 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.457151054s)
--- PASS: TestFunctional/serial/StartWithProxy (84.46s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 08:40:19.657093    9697 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014502 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-014502 --alsologtostderr -v=8: (36.033631705s)
functional_test.go:678: soft start took 36.034321513s for "functional-014502" cluster.
I1213 08:40:55.691137    9697 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (36.03s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-014502 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 cache add registry.k8s.io/pause:3.1: (1.110210371s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 cache add registry.k8s.io/pause:3.3: (1.192150078s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 cache add registry.k8s.io/pause:latest: (1.131764662s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-014502 /tmp/TestFunctionalserialCacheCmdcacheadd_local952957267/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cache add minikube-local-cache-test:functional-014502
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 cache add minikube-local-cache-test:functional-014502: (1.79071432s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cache delete minikube-local-cache-test:functional-014502
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-014502
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (183.658117ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
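Note: the reload sequence above can be replayed manually against any running profile (functional-014502 is this run's profile). The point of cache reload is to push the cached images back into the node's runtime after they have been removed:

	minikube -p functional-014502 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-014502 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	minikube -p functional-014502 cache reload
	minikube -p functional-014502 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again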

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 kubectl -- --context functional-014502 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-014502 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.51s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014502 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-014502 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.51088613s)
functional_test.go:776: restart took 33.511042154s for "functional-014502" cluster.
I1213 08:41:37.188269    9697 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (33.51s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-014502 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 logs: (1.31756878s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 logs --file /tmp/TestFunctionalserialLogsFileCmd368892199/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 logs --file /tmp/TestFunctionalserialLogsFileCmd368892199/001/logs.txt: (1.309076947s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
TestFunctional/serial/InvalidService (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-014502 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-014502
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-014502: exit status 115 (236.618945ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.248:30506 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-014502 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)
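Note: exit status 115 (SVC_UNREACHABLE) is the expected outcome here: testdata/invalidsvc.yaml defines a NodePort service with no running backing pod, so the URL is published but the service check fails. Roughly, assuming the same manifest:

	kubectl --context functional-014502 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-014502   # exits 115: no running pod for service invalid-svc found
	kubectl --context functional-014502 delete -f testdata/invalidsvc.yaml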

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 config get cpus: exit status 14 (71.460092ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 config get cpus: exit status 14 (59.123976ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-014502 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-014502 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 15748: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.89s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014502 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-014502 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (123.251163ms)

                                                
                                                
-- stdout --
	* [functional-014502] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:41:45.597804   15597 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:41:45.598063   15597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:41:45.598073   15597 out.go:374] Setting ErrFile to fd 2...
	I1213 08:41:45.598078   15597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:41:45.598357   15597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:41:45.598812   15597 out.go:368] Setting JSON to false
	I1213 08:41:45.599991   15597 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1450,"bootTime":1765613856,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:41:45.600065   15597 start.go:143] virtualization: kvm guest
	I1213 08:41:45.601885   15597 out.go:179] * [functional-014502] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:41:45.606008   15597 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:41:45.606032   15597 notify.go:221] Checking for updates...
	I1213 08:41:45.608403   15597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:41:45.609662   15597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:41:45.610982   15597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:41:45.613067   15597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:41:45.614283   15597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:41:45.615978   15597 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:41:45.616569   15597 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:41:45.652951   15597 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 08:41:45.654212   15597 start.go:309] selected driver: kvm2
	I1213 08:41:45.654230   15597 start.go:927] validating driver "kvm2" against &{Name:functional-014502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-014502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:41:45.654367   15597 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:41:45.657077   15597 out.go:203] 
	W1213 08:41:45.658520   15597 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:41:45.659587   15597 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014502 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
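Note: the non-zero exit above is the intended memory validation: even with --dry-run, minikube checks the requested allocation against its usable minimum, so 250MB fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) while a dry run that keeps the profile's existing settings passes. Approximately:

	minikube start -p functional-014502 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio          # exit 23
	minikube start -p functional-014502 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio  # exit 0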

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014502 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-014502 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (114.75952ms)

                                                
                                                
-- stdout --
	* [functional-014502] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:41:45.485392   15581 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:41:45.485497   15581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:41:45.485507   15581 out.go:374] Setting ErrFile to fd 2...
	I1213 08:41:45.485513   15581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:41:45.485831   15581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:41:45.486226   15581 out.go:368] Setting JSON to false
	I1213 08:41:45.487028   15581 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1449,"bootTime":1765613856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:41:45.487078   15581 start.go:143] virtualization: kvm guest
	I1213 08:41:45.489103   15581 out.go:179] * [functional-014502] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 08:41:45.490439   15581 notify.go:221] Checking for updates...
	I1213 08:41:45.490480   15581 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:41:45.491830   15581 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:41:45.493075   15581 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:41:45.494158   15581 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:41:45.495024   15581 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:41:45.496148   15581 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:41:45.497763   15581 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:41:45.498232   15581 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:41:45.530346   15581 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 08:41:45.531630   15581 start.go:309] selected driver: kvm2
	I1213 08:41:45.531651   15581 start.go:927] validating driver "kvm2" against &{Name:functional-014502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-014502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:41:45.531758   15581 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:41:45.534124   15581 out.go:203] 
	W1213 08:41:45.535372   15581 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:41:45.536608   15581 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-014502 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-014502 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-pwrlx" [9e32a59a-f07d-46de-8b77-9e9e1d78dca7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-pwrlx" [9e32a59a-f07d-46de-8b77-9e9e1d78dca7] Running
2025/12/13 08:42:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003454483s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.248:30573
functional_test.go:1680: http://192.168.39.248:30573: success! body:
Request served by hello-node-connect-7d85dfc575-pwrlx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.248:30573
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.51s)
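Note: the connect check above boils down to exposing a deployment as a NodePort service and hitting the URL minikube resolves for it. A hand-run equivalent (image and names as used by this test):

	kubectl --context functional-014502 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-014502 expose deployment hello-node-connect --type=NodePort --port=8080
	minikube -p functional-014502 service hello-node-connect --url     # e.g. http://192.168.39.248:30573
	curl "$(minikube -p functional-014502 service hello-node-connect --url)"   # echo-server reports the request back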

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [06623124-2b12-4142-8f40-b485728063cf] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.137565154s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-014502 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-014502 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-014502 get pvc myclaim -o=json
I1213 08:42:00.618465    9697 retry.go:31] will retry after 1.10496904s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:95eef488-c66e-40b5-ac1c-cd7af446f3b8 ResourceVersion:820 Generation:0 CreationTimestamp:2025-12-13 08:42:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001beda30 VolumeMode:0xc001beda40 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-014502 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-014502 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:42:01.899677    9697 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d8907ac0-2cad-49a1-9ef6-79f396d05558] Pending
helpers_test.go:353: "sp-pod" [d8907ac0-2cad-49a1-9ef6-79f396d05558] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [d8907ac0-2cad-49a1-9ef6-79f396d05558] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.003594469s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-014502 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-014502 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-014502 delete -f testdata/storage-provisioner/pod.yaml: (1.283618838s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-014502 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:42:31.453721    9697 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8ecb5b80-5aa3-40d7-81ac-a8af104264b7] Pending
helpers_test.go:353: "sp-pod" [8ecb5b80-5aa3-40d7-81ac-a8af104264b7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00478238s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-014502 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.39s)
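Note: the test above is a persistence round-trip on the default storage-provisioner class: bind a PVC, write through one pod, delete it, and confirm a fresh pod still sees the file. Using the same testdata manifests:

	kubectl --context functional-014502 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-014502 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-014502 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-014502 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-014502 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-014502 exec sp-pod -- ls /tmp/mount   # foo should still be listed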

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh -n functional-014502 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cp functional-014502:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd957039393/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh -n functional-014502 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh -n functional-014502 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)

                                                
                                    
TestFunctional/parallel/MySQL (32.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-014502 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-6dkhx" [d7caf935-8806-435d-97af-873fd857df5b] Pending
E1213 08:41:58.333599    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "mysql-6bcdcbc558-6dkhx" [d7caf935-8806-435d-97af-873fd857df5b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-6dkhx" [d7caf935-8806-435d-97af-873fd857df5b] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.003678937s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-014502 exec mysql-6bcdcbc558-6dkhx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-014502 exec mysql-6bcdcbc558-6dkhx -- mysql -ppassword -e "show databases;": exit status 1 (146.82437ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:42:24.360628    9697 retry.go:31] will retry after 1.427195515s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-014502 exec mysql-6bcdcbc558-6dkhx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-014502 exec mysql-6bcdcbc558-6dkhx -- mysql -ppassword -e "show databases;": exit status 1 (154.447242ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:42:25.943346    9697 retry.go:31] will retry after 758.053845ms: exit status 1
E1213 08:42:26.049508    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-014502 exec mysql-6bcdcbc558-6dkhx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-014502 exec mysql-6bcdcbc558-6dkhx -- mysql -ppassword -e "show databases;": exit status 1 (147.523767ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:42:26.850185    9697 retry.go:31] will retry after 3.172279497s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-014502 exec mysql-6bcdcbc558-6dkhx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.20s)
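Note: the ERROR 1045 retries above are expected; MySQL answers "Access denied" while its entrypoint is still provisioning the root account, so the test simply re-runs the same query with backoff until it succeeds. Below is a minimal standalone sketch of that polling pattern in Go, assuming kubectl is on PATH and reusing the pod name and password from the log. It is not the suite's own retry helper.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL re-runs "show databases;" inside the pod until MySQL accepts
// the password or the deadline passes. ERROR 1045 while the container is
// still initializing is treated as "retry", exactly like the log above.
func waitForMySQL(kubeContext, pod, password string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 500 * time.Millisecond
	for {
		cmd := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
			"mysql", "-p"+password, "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is ready:\n%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	if err := waitForMySQL("functional-014502", "mysql-6bcdcbc558-6dkhx", "password", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}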

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9697/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo cat /etc/test/nested/copy/9697/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9697.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo cat /etc/ssl/certs/9697.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9697.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo cat /usr/share/ca-certificates/9697.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/96972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo cat /etc/ssl/certs/96972.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/96972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo cat /usr/share/ca-certificates/96972.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.18s)
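Note: CertSync checks each test certificate both under its copied name (9697.pem, 96972.pem) and under what appears to be its OpenSSL subject-hash alias (51391683.0, 3ec20f2e.0). The sketch below repeats the same existence check from Go, assuming the built binary path and profile name seen in the log; the path list is copied verbatim from the test output.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-014502"
	paths := []string{
		"/etc/ssl/certs/9697.pem",
		"/usr/share/ca-certificates/9697.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/96972.pem",
		"/usr/share/ca-certificates/96972.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		// Same check as the test: the file must be readable inside the VM.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "sudo cat "+p).CombinedOutput()
		if err != nil || len(out) == 0 {
			fmt.Printf("MISSING %s: %v\n", p, err)
			continue
		}
		fmt.Printf("ok %s (%d bytes)\n", p, len(out))
	}
}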

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-014502 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
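Note: the go-template in the command above flattens every node label key onto one line. An equivalent check without templates, decoding "kubectl get nodes -o json" with the standard library, is sketched below; it assumes the same kube context from the log and at least one node.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList models only the fields needed from "kubectl get nodes -o json".
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name   string            `json:"name"`
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-014502",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Metadata.Name)
		for k := range n.Metadata.Labels {
			fmt.Println("  ", k) // label keys, mirroring the template output
		}
	}
}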

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh "sudo systemctl is-active docker": exit status 1 (196.568126ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh "sudo systemctl is-active containerd": exit status 1 (231.208398ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
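Note: the non-zero exits here are the point of the test. With cri-o as the active runtime, "systemctl is-active docker" and "... containerd" print "inactive" and exit non-zero (status 3 in the log), which ssh surfaces as "Process exited with status 3". The sketch below makes that expectation explicit, assuming the binary path and profile from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-014502",
			"ssh", "sudo systemctl is-active "+svc).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// "inactive" plus a non-zero exit is the expected result when cri-o
		// is the configured container runtime; "active" would be a failure.
		if state == "inactive" && err != nil {
			fmt.Printf("%s: inactive as expected\n", svc)
		} else {
			fmt.Printf("%s: unexpected state %q (err=%v)\n", svc, state, err)
		}
	}
}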

                                                
                                    
x
+
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-014502 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-014502 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-s7845" [0944646b-1abd-4a7d-a72c-d166e7a1a709] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-s7845" [0944646b-1abd-4a7d-a72c-d166e7a1a709] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004009115s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)
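Note: the create/expose pair above is what gives the later ServiceCmd subtests a NodePort endpoint to hit. A compact sketch of the same two calls plus a readiness wait follows; it substitutes kubectl's built-in "wait --for=condition=available" for the suite's own pod poller, which is an assumption about what is acceptable here.

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-014502"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	// Mirror the log: create the deployment, expose it on a NodePort,
	// then block until the deployment reports available.
	steps := [][]string{
		{"create", "deployment", "hello-node", "--image", "kicbase/echo-server"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		{"wait", "--for=condition=available", "deployment/hello-node", "--timeout=600s"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			panic(err)
		}
	}
}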

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "409.119199ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.917121ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdany-port2520949798/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765615304602707047" to /tmp/TestFunctionalparallelMountCmdany-port2520949798/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765615304602707047" to /tmp/TestFunctionalparallelMountCmdany-port2520949798/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765615304602707047" to /tmp/TestFunctionalparallelMountCmdany-port2520949798/001/test-1765615304602707047
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.722729ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:41:44.823760    9697 retry.go:31] will retry after 745.393953ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 08:41 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 08:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 08:41 test-1765615304602707047
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh cat /mount-9p/test-1765615304602707047
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-014502 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [22cad104-91f9-4414-a8ea-6e019381c9d6] Pending
helpers_test.go:353: "busybox-mount" [22cad104-91f9-4414-a8ea-6e019381c9d6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [22cad104-91f9-4414-a8ea-6e019381c9d6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [22cad104-91f9-4414-a8ea-6e019381c9d6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005225306s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-014502 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdany-port2520949798/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.38s)
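Note: the first findmnt probe fails only because the 9p mount is still coming up; the test retries once and then exercises the mount from both the host and a busybox pod. The sketch below covers the host-side lifecycle only: start the mount daemon, poll findmnt until the mount appears, then tear it down. The binary and profile come from the log; the host directory /tmp/demo-mount is a made-up placeholder, and the in-pod half of the test is omitted.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-014502"

	// Start "minikube mount" as a background daemon, like the test does.
	mount := exec.Command(bin, "mount", "-p", profile, "/tmp/demo-mount:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // stop the mount daemon on exit

	// Poll until the 9p filesystem shows up inside the guest.
	for i := 0; i < 20; i++ {
		err := exec.Command(bin, "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}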

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "256.549383ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.214814ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014502 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-014502  │ 69bb0f28d3deb │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-014502  │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014502 image ls --format table --alsologtostderr:
I1213 08:42:10.471833   16574 out.go:360] Setting OutFile to fd 1 ...
I1213 08:42:10.472144   16574 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:10.472165   16574 out.go:374] Setting ErrFile to fd 2...
I1213 08:42:10.472172   16574 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:10.474626   16574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:42:10.475531   16574 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:10.475680   16574 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:10.478394   16574 ssh_runner.go:195] Run: systemctl --version
I1213 08:42:10.481119   16574 main.go:143] libmachine: domain functional-014502 has defined MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:10.481634   16574 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6f:3f:70", ip: ""} in network mk-functional-014502: {Iface:virbr1 ExpiryTime:2025-12-13 09:39:10 +0000 UTC Type:0 Mac:52:54:00:6f:3f:70 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-014502 Clientid:01:52:54:00:6f:3f:70}
I1213 08:42:10.481672   16574 main.go:143] libmachine: domain functional-014502 has defined IP address 192.168.39.248 and MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:10.481849   16574 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-014502/id_rsa Username:docker}
I1213 08:42:10.580417   16574 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
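Note: the stderr above shows what "image ls --format table" actually does: ssh into the guest, run "sudo crictl images --output json", and render the result. The sketch below reproduces that pipeline with text/tabwriter; it assumes crictl's JSON keeps an "images" array with the repoTags, id and size fields implied here, which is an assumption about the crictl output shape rather than something the log states directly.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"text/tabwriter"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-014502",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	w := tabwriter.NewWriter(os.Stdout, 2, 4, 2, ' ', 0)
	fmt.Fprintln(w, "IMAGE:TAG\tIMAGE ID\tSIZE")
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			// Truncate the ID to the short form used in the table above.
			fmt.Fprintf(w, "%s\t%.13s\t%s\n", tag, img.ID, img.Size)
		}
	}
	w.Flush()
}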

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014502 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e
1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851
c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.i
o/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-014502"],"size":"4945146"},{"id":"69bb0f28d3deb34bcc9caffcf4a718ee9d125475b90eefa0fe483b7f03e37c1e","repoDigests":["localhost/minikube-local-cache-test@sha256:695a4d9c5400b57ce72aa3af1138
fb074847ffc8af70ae1738315b30f6fb05b3"],"repoTags":["localhost/minikube-local-cache-test:functional-014502"],"size":"3328"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.i
o/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e451
1d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014502 image ls --format json --alsologtostderr:
I1213 08:42:10.206822   16564 out.go:360] Setting OutFile to fd 1 ...
I1213 08:42:10.206946   16564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:10.206954   16564 out.go:374] Setting ErrFile to fd 2...
I1213 08:42:10.206961   16564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:10.207249   16564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:42:10.208024   16564 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:10.208161   16564 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:10.210589   16564 ssh_runner.go:195] Run: systemctl --version
I1213 08:42:10.213396   16564 main.go:143] libmachine: domain functional-014502 has defined MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:10.213865   16564 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6f:3f:70", ip: ""} in network mk-functional-014502: {Iface:virbr1 ExpiryTime:2025-12-13 09:39:10 +0000 UTC Type:0 Mac:52:54:00:6f:3f:70 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-014502 Clientid:01:52:54:00:6f:3f:70}
I1213 08:42:10.213901   16564 main.go:143] libmachine: domain functional-014502 has defined IP address 192.168.39.248 and MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:10.214082   16564 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-014502/id_rsa Username:docker}
I1213 08:42:10.314950   16564 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
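Note: the JSON stdout above is an array of image records with id, repoDigests, repoTags and size (size as a string). The small decoder below matches that exact shape and is handy when scripting against "image ls --format json"; the field set is inferred from the output shown, and the printed layout is just an example.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord matches the objects printed by "minikube image ls --format json".
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-014502",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		name := "<untagged>"
		if len(img.RepoTags) > 0 {
			name = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", name, img.Size)
	}
}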

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 image ls --format yaml --alsologtostderr: (2.203725775s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014502 image ls --format yaml --alsologtostderr:
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-014502
size: "4945146"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 69bb0f28d3deb34bcc9caffcf4a718ee9d125475b90eefa0fe483b7f03e37c1e
repoDigests:
- localhost/minikube-local-cache-test@sha256:695a4d9c5400b57ce72aa3af1138fb074847ffc8af70ae1738315b30f6fb05b3
repoTags:
- localhost/minikube-local-cache-test:functional-014502
size: "3328"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014502 image ls --format yaml --alsologtostderr:
I1213 08:42:07.983342   16505 out.go:360] Setting OutFile to fd 1 ...
I1213 08:42:07.983591   16505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:07.983600   16505 out.go:374] Setting ErrFile to fd 2...
I1213 08:42:07.983605   16505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:07.983778   16505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:42:07.984345   16505 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:07.984440   16505 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:07.986630   16505 ssh_runner.go:195] Run: systemctl --version
I1213 08:42:07.989328   16505 main.go:143] libmachine: domain functional-014502 has defined MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:07.989816   16505 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6f:3f:70", ip: ""} in network mk-functional-014502: {Iface:virbr1 ExpiryTime:2025-12-13 09:39:10 +0000 UTC Type:0 Mac:52:54:00:6f:3f:70 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-014502 Clientid:01:52:54:00:6f:3f:70}
I1213 08:42:07.989839   16505 main.go:143] libmachine: domain functional-014502 has defined IP address 192.168.39.248 and MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:07.989999   16505 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-014502/id_rsa Username:docker}
I1213 08:42:08.086522   16505 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 08:42:10.125262   16505 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.038705064s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (2.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (10.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh pgrep buildkitd: exit status 1 (178.533975ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image build -t localhost/my-image:functional-014502 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-014502 image build -t localhost/my-image:functional-014502 testdata/build --alsologtostderr: (10.292545072s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014502 image build -t localhost/my-image:functional-014502 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4988e76df2c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-014502
--> 02c706ad8dd
Successfully tagged localhost/my-image:functional-014502
02c706ad8ddb5e4cf4a3224b701a08b7c9033298c109a75e46a509a0e11f94c0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014502 image build -t localhost/my-image:functional-014502 testdata/build --alsologtostderr:
I1213 08:42:08.711437   16554 out.go:360] Setting OutFile to fd 1 ...
I1213 08:42:08.711804   16554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:08.711817   16554 out.go:374] Setting ErrFile to fd 2...
I1213 08:42:08.711824   16554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:42:08.712171   16554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:42:08.712965   16554 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:08.713783   16554 config.go:182] Loaded profile config "functional-014502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:42:08.716445   16554 ssh_runner.go:195] Run: systemctl --version
I1213 08:42:08.719420   16554 main.go:143] libmachine: domain functional-014502 has defined MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:08.719925   16554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6f:3f:70", ip: ""} in network mk-functional-014502: {Iface:virbr1 ExpiryTime:2025-12-13 09:39:10 +0000 UTC Type:0 Mac:52:54:00:6f:3f:70 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-014502 Clientid:01:52:54:00:6f:3f:70}
I1213 08:42:08.719964   16554 main.go:143] libmachine: domain functional-014502 has defined IP address 192.168.39.248 and MAC address 52:54:00:6f:3f:70 in network mk-functional-014502
I1213 08:42:08.720131   16554 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-014502/id_rsa Username:docker}
I1213 08:42:08.843985   16554 build_images.go:162] Building image from path: /tmp/build.2514110903.tar
I1213 08:42:08.844067   16554 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:42:08.861219   16554 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2514110903.tar
I1213 08:42:08.869231   16554 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2514110903.tar: stat -c "%s %y" /var/lib/minikube/build/build.2514110903.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2514110903.tar': No such file or directory
I1213 08:42:08.869272   16554 ssh_runner.go:362] scp /tmp/build.2514110903.tar --> /var/lib/minikube/build/build.2514110903.tar (3072 bytes)
I1213 08:42:08.940181   16554 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2514110903
I1213 08:42:08.960405   16554 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2514110903 -xf /var/lib/minikube/build/build.2514110903.tar
I1213 08:42:08.977541   16554 crio.go:315] Building image: /var/lib/minikube/build/build.2514110903
I1213 08:42:08.977620   16554 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-014502 /var/lib/minikube/build/build.2514110903 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 08:42:18.891965   16554 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-014502 /var/lib/minikube/build/build.2514110903 --cgroup-manager=cgroupfs: (9.914322086s)
I1213 08:42:18.892060   16554 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2514110903
I1213 08:42:18.913412   16554 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2514110903.tar
I1213 08:42:18.931446   16554 build_images.go:218] Built localhost/my-image:functional-014502 from /tmp/build.2514110903.tar
I1213 08:42:18.931485   16554 build_images.go:134] succeeded building to: functional-014502
I1213 08:42:18.931489   16554 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (10.67s)
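Note: because buildkitd is not running in the guest (the pgrep probe exits 1), minikube tars the build context, copies it into the VM and builds it with podman, as the stderr above shows. The sketch below reconstructs an equivalent build context from the three STEP lines and drives the same "image build" entry point from Go. The Dockerfile text is read off the log; the content.txt payload and the image tag are made-up placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed from the STEP 1/3 .. 3/3 lines in the build output above.
	dockerfile := `FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
`
	dir, err := os.MkdirTemp("", "build-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Same entry point as the test: minikube picks the in-guest builder (podman here).
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-014502",
		"image", "build", "-t", "localhost/my-image:demo", dir)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}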

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.730676949s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-014502
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image load --daemon kicbase/echo-server:functional-014502 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image load --daemon kicbase/echo-server:functional-014502 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-014502
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image load --daemon kicbase/echo-server:functional-014502 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image save kicbase/echo-server:functional-014502 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image rm kicbase/echo-server:functional-014502 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 service list -o json
functional_test.go:1504: Took "457.446441ms" to run "out/minikube-linux-amd64 -p functional-014502 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdspecific-port3996513752/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.138536ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:41:54.159383    9697 retry.go:31] will retry after 581.357932ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdspecific-port3996513752/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh "sudo umount -f /mount-9p": exit status 1 (169.550097ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-014502 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdspecific-port3996513752/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.51s)
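Note: the final "umount -f" above fails with "not mounted." and exit status 32 because the mount daemon had already been stopped a few lines earlier; the test logs the failure and moves on. The helper below treats that case as a clean result during cleanup; matching on the "not mounted" string is an assumption about what is safe to ignore, not something the suite itself does.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// forceUnmount force-unmounts a guest path but does not treat an
// already-unmounted path as an error, mirroring the log above.
func forceUnmount(bin, profile, guestPath string) error {
	out, err := exec.Command(bin, "-p", profile, "ssh",
		"sudo umount -f "+guestPath).CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		return nil // nothing was mounted there; that is fine for cleanup
	}
	return err
}

func main() {
	err := forceUnmount("out/minikube-linux-amd64", "functional-014502", "/mount-9p")
	fmt.Println("cleanup result:", err)
}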

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.248:31863
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-014502
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 image save --daemon kicbase/echo-server:functional-014502 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-014502
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.248:31863
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3479942592/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3479942592/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3479942592/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T" /mount1: exit status 1 (175.054797ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:41:55.661868    9697 retry.go:31] will retry after 346.378219ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-014502 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3479942592/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3479942592/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3479942592/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)
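A minimal sketch of the same cleanup flow, assuming any scratch directory on the host (the /tmp/TestFunctionalparallelMountCmdVerifyCleanup... path above was created by the harness):

  # Mount one host directory at three guest paths; each mount command stays in
  # the foreground, so background it, verify from inside the VM, then kill
  # every mount process for the profile in one call.
  SRC=/tmp/scratch-dir   # hypothetical host path
  out/minikube-linux-amd64 mount -p functional-014502 "$SRC":/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 mount -p functional-014502 "$SRC":/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 mount -p functional-014502 "$SRC":/mount3 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-014502 ssh "findmnt -T /mount1"
  out/minikube-linux-amd64 mount -p functional-014502 --kill=true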

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-014502 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-014502
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-014502
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-014502
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-5761/.minikube/files/etc/test/nested/copy/9697/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (83.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589798 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-589798 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m23.382950753s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (83.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (47.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 08:44:03.035331    9697 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589798 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-589798 --alsologtostderr -v=8: (47.564276566s)
functional_test.go:678: soft start took 47.56465442s for "functional-589798" cluster.
I1213 08:44:50.600038    9697 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (47.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-589798 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 cache add registry.k8s.io/pause:3.1: (1.127768092s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 cache add registry.k8s.io/pause:3.3: (1.150394352s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 cache add registry.k8s.io/pause:latest: (1.123839481s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach48523306/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cache add minikube-local-cache-test:functional-589798
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 cache add minikube-local-cache-test:functional-589798: (1.763386866s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cache delete minikube-local-cache-test:functional-589798
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-589798
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.22958ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)
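The CacheCmd subtests above exercise one workflow end to end; for a single image it looks roughly like this (profile name from this run):

  # Cache an image on the host and load it into the node, prove it is present,
  # remove it from the node's runtime, restore it from the host cache, and
  # finally drop it from the host cache.
  out/minikube-linux-amd64 -p functional-589798 cache add registry.k8s.io/pause:latest
  out/minikube-linux-amd64 cache list
  out/minikube-linux-amd64 -p functional-589798 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-589798 cache reload
  out/minikube-linux-amd64 -p functional-589798 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest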

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 kubectl -- --context functional-589798 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-589798 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (45.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589798 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-589798 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.76134767s)
functional_test.go:776: restart took 45.761456397s for "functional-589798" cluster.
I1213 08:45:44.179315    9697 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (45.76s)
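The restart above is how component flags are passed through on start; a minimal sketch, using the same admission-plugin override as the test:

  # Restart the existing profile with an extra apiserver flag; --wait=all blocks
  # until every verified component reports Ready.
  out/minikube-linux-amd64 start -p functional-589798 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all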

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-589798 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 logs: (1.305626873s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3614378426/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3614378426/001/logs.txt: (1.285451381s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-589798 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-589798
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-589798: exit status 115 (231.930856ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.215:31912 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-589798 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 config get cpus: exit status 14 (59.016243ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 config get cpus: exit status 14 (63.155096ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.39s)
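The ConfigCmd assertions above hinge on config get returning exit status 14 for an unset key; the same sequence by hand (profile from this run):

  # Set, read, and clear a per-profile config key; the final get fails with
  # exit status 14 because the key is gone again.
  out/minikube-linux-amd64 -p functional-589798 config set cpus 2
  out/minikube-linux-amd64 -p functional-589798 config get cpus
  out/minikube-linux-amd64 -p functional-589798 config unset cpus
  out/minikube-linux-amd64 -p functional-589798 config get cpus; echo "exit=$?"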

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (10.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-589798 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-589798 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 19025: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (10.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589798 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-589798 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (135.183695ms)

                                                
                                                
-- stdout --
	* [functional-589798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:46:02.654200   18745 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:46:02.654563   18745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:46:02.654579   18745 out.go:374] Setting ErrFile to fd 2...
	I1213 08:46:02.654586   18745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:46:02.654913   18745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:46:02.655520   18745 out.go:368] Setting JSON to false
	I1213 08:46:02.656722   18745 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1707,"bootTime":1765613856,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:46:02.656801   18745 start.go:143] virtualization: kvm guest
	I1213 08:46:02.658714   18745 out.go:179] * [functional-589798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:46:02.660033   18745 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:46:02.660053   18745 notify.go:221] Checking for updates...
	I1213 08:46:02.662663   18745 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:46:02.663969   18745 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:46:02.665387   18745 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:46:02.666955   18745 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:46:02.668544   18745 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:46:02.670553   18745 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 08:46:02.671409   18745 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:46:02.704710   18745 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 08:46:02.706309   18745 start.go:309] selected driver: kvm2
	I1213 08:46:02.706329   18745 start.go:927] validating driver "kvm2" against &{Name:functional-589798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-589798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:46:02.706452   18745 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:46:02.708852   18745 out.go:203] 
	W1213 08:46:02.710179   18745 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:46:02.711541   18745 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589798 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.26s)
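As a reference for the dry-run check, the two invocations the test makes are, in order, an undersized request that fails validation (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) and a plain dry run that succeeds; neither is expected to touch the running VM:

  out/minikube-linux-amd64 start -p functional-589798 --dry-run --memory 250MB \
    --alsologtostderr --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
  out/minikube-linux-amd64 start -p functional-589798 --dry-run --alsologtostderr -v=1 \
    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0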

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589798 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-589798 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (119.132187ms)

                                                
                                                
-- stdout --
	* [functional-589798] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:46:02.080797   18688 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:46:02.080901   18688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:46:02.080910   18688 out.go:374] Setting ErrFile to fd 2...
	I1213 08:46:02.080915   18688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:46:02.081320   18688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:46:02.081796   18688 out.go:368] Setting JSON to false
	I1213 08:46:02.082757   18688 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1706,"bootTime":1765613856,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:46:02.082819   18688 start.go:143] virtualization: kvm guest
	I1213 08:46:02.084996   18688 out.go:179] * [functional-589798] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 08:46:02.086324   18688 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:46:02.086329   18688 notify.go:221] Checking for updates...
	I1213 08:46:02.088681   18688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:46:02.089958   18688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 08:46:02.091213   18688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 08:46:02.092510   18688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:46:02.093772   18688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:46:02.095684   18688 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 08:46:02.096216   18688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:46:02.129867   18688 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 08:46:02.131266   18688 start.go:309] selected driver: kvm2
	I1213 08:46:02.131280   18688 start.go:927] validating driver "kvm2" against &{Name:functional-589798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-589798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:46:02.131446   18688 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:46:02.133729   18688 out.go:203] 
	W1213 08:46:02.135497   18688 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:46:02.136751   18688 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.04s)
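For reference, the three status shapes probed above (default table, custom Go template, JSON) can be requested directly; the template labels are arbitrary text, only the field names matter:

  out/minikube-linux-amd64 -p functional-589798 status
  out/minikube-linux-amd64 -p functional-589798 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-amd64 -p functional-589798 status -o json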

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (9.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-589798 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-589798 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-chspj" [5ac54a53-22af-41d6-b51b-af13073cf68b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-chspj" [5ac54a53-22af-41d6-b51b-af13073cf68b] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004382427s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.215:31032
functional_test.go:1680: http://192.168.39.215:31032: success! body:
Request served by hello-node-connect-9f67c86d4-chspj

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.215:31032
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (9.47s)
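A minimal sketch of the connect flow above; the 192.168.39.215:31032 endpoint is specific to this run, and curl here stands in for the HTTP GET the test performs itself:

  # Create and expose the deployment, resolve its NodePort URL through minikube,
  # then hit it; echo-server replies with the request it received.
  kubectl --context functional-589798 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-589798 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-amd64 -p functional-589798 service hello-node-connect --url)
  curl -s "$URL"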

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (30.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [478c6a70-7c76-4371-929d-9e9dbcd102a0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003917721s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-589798 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-589798 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-589798 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-589798 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:45:57.621270    9697 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [31b22d97-4071-4072-ade4-6ce787dd34c7] Pending
helpers_test.go:353: "sp-pod" [31b22d97-4071-4072-ade4-6ce787dd34c7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [31b22d97-4071-4072-ade4-6ce787dd34c7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00699681s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-589798 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-589798 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-589798 delete -f testdata/storage-provisioner/pod.yaml: (3.627107709s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-589798 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:46:13.583741    9697 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c47d181c-0d20-4a7b-a4e6-24154ab60669] Pending
helpers_test.go:353: "sp-pod" [c47d181c-0d20-4a7b-a4e6-24154ab60669] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00827951s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-589798 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (30.20s)
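Stripped of the readiness polling, the persistence check above boils down to this sequence (the manifests are the ones shipped under testdata/storage-provisioner; sp-pod is the pod they create):

  # Write through the first pod, delete it, re-create it against the same PVC,
  # and confirm the file survived the pod's removal.
  kubectl --context functional-589798 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-589798 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-589798 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-589798 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-589798 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-589798 exec sp-pod -- ls /tmp/mount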

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh -n functional-589798 "sudo cat /home/docker/cp-test.txt"
2025/12/13 08:46:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cp functional-589798:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1812814735/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh -n functional-589798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh -n functional-589798 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.27s)
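For reference, the copy directions exercised above are host-to-node, node-to-host, and host-to-an-arbitrary-guest-path; the local destination below is a placeholder for the harness's temp directory:

  out/minikube-linux-amd64 -p functional-589798 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-589798 cp functional-589798:/home/docker/cp-test.txt ./cp-test.txt   # placeholder local path
  out/minikube-linux-amd64 -p functional-589798 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
  out/minikube-linux-amd64 -p functional-589798 ssh -n functional-589798 "sudo cat /tmp/does/not/exist/cp-test.txt"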

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (39.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-589798 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-n7sfg" [701b4fbf-d941-47ca-b610-d9ef433cf989] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-n7sfg" [701b4fbf-d941-47ca-b610-d9ef433cf989] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 32.004358446s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;": exit status 1 (178.19693ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:46:37.483432    9697 retry.go:31] will retry after 820.218661ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;": exit status 1 (176.769585ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:46:38.481442    9697 retry.go:31] will retry after 2.13408119s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;": exit status 1 (168.926678ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:46:40.786856    9697 retry.go:31] will retry after 1.341201097s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;": exit status 1 (119.303154ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:46:42.248084    9697 retry.go:31] will retry after 1.990992163s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589798 exec mysql-7d7b65bc95-n7sfg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (39.33s)
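Note: the retries above are expected. The pod reports Running before mysqld finishes initializing, so the first exec attempts are rejected (ERROR 1045) or find no server socket yet (ERROR 2002). A rough manual equivalent of the probe the test performs (the retry loop itself is an assumption; the context, password, and deployment name are taken from the log) is:

    for i in $(seq 1 10); do
      kubectl --context functional-589798 exec deploy/mysql -- \
        mysql -ppassword -e "show databases;" && break
      sleep 2
    done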

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9697/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /etc/test/nested/copy/9697/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.23s)
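Note: FileSync verifies that files staged on the host are copied into the VM at the same path. A minimal manual reproduction (assuming the default MINIKUBE_HOME layout; the file must be staged before the cluster is started or restarted for it to be synced) might look like:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/9697
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/9697/hosts
    out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /etc/test/nested/copy/9697/hosts"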

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9697.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /etc/ssl/certs/9697.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9697.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /usr/share/ca-certificates/9697.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/96972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /etc/ssl/certs/96972.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/96972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /usr/share/ca-certificates/96972.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.05s)
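Note: CertSync checks that a host-provided certificate appears in the VM under both /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL subject-hash alias such as 51391683.0. A sketch of a manual check (assuming the certificate was staged under ~/.minikube/certs before start) is:

    out/minikube-linux-amd64 -p functional-589798 ssh "sudo cat /etc/ssl/certs/9697.pem"
    # the hashed alias name in /etc/ssl/certs comes from the certificate's subject hash:
    openssl x509 -noout -subject_hash -in ~/.minikube/certs/9697.pem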

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-589798 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)
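Note: the go-template above only prints the label keys of the first node; the same information can be read without a template (plain kubectl against the same context, not part of the test):

    kubectl --context functional-589798 get nodes --show-labels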

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh "sudo systemctl is-active docker": exit status 1 (214.256687ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh "sudo systemctl is-active containerd": exit status 1 (227.225451ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.44s)
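Note: exit status 3 from "systemctl is-active" simply means the unit is inactive, which is the expected state for docker and containerd on this crio profile. A quick manual cross-check (crio being the active unit is an assumption based on ContainerRuntime=crio in the profile config) is:

    out/minikube-linux-amd64 -p functional-589798 ssh "sudo systemctl is-active crio"      # expect: active (exit 0)
    out/minikube-linux-amd64 -p functional-589798 ssh "sudo systemctl is-active docker"    # expect: inactive (exit 3)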

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.69s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-589798 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-589798 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-jflws" [134e5223-1fce-41e7-93c6-3fff2edef2fb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-jflws" [134e5223-1fce-41e7-93c6-3fff2edef2fb] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.006791291s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.20s)
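Note: the deployment and NodePort service created here are reused by the later ServiceCmd subtests. Reproducing the setup by hand and reading back the allocated NodePort (the jsonpath query is an addition, not part of the test) looks like:

    kubectl --context functional-589798 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-589798 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-589798 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'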

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "292.869393ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.106176ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "238.862217ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.98676ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.30s)
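Note: the JSON output of "profile list -o json" is what the test times here; for scripting it can be filtered with jq (the .valid[].Name field layout is an assumption about minikube's profile JSON, not shown in this log):

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'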

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3158758019/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765615552717406226" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3158758019/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765615552717406226" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3158758019/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765615552717406226" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3158758019/001/test-1765615552717406226
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (163.908237ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:45:52.881619    9697 retry.go:31] will retry after 517.336933ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 08:45 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 08:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 08:45 test-1765615552717406226
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh cat /mount-9p/test-1765615552717406226
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-589798 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [bfdc93ef-367a-4f5d-a4f7-e878bc44f6ce] Pending
helpers_test.go:353: "busybox-mount" [bfdc93ef-367a-4f5d-a4f7-e878bc44f6ce] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [bfdc93ef-367a-4f5d-a4f7-e878bc44f6ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [bfdc93ef-367a-4f5d-a4f7-e878bc44f6ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004434817s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-589798 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3158758019/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.12s)
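Note: the any-port variant lets the mount command pick its own 9p port. The checks in the log boil down to the following manual sequence (the host directory is illustrative; the test additionally runs a busybox pod against the mount):

    out/minikube-linux-amd64 mount -p functional-589798 /tmp/mount-demo:/mount-9p &
    out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-589798 ssh "ls -la /mount-9p"
    out/minikube-linux-amd64 -p functional-589798 ssh "sudo umount -f /mount-9p"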

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4266139454/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.535659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:46:02.056998    9697 retry.go:31] will retry after 745.651404ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4266139454/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh "sudo umount -f /mount-9p": exit status 1 (170.733047ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-589798 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4266139454/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.72s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 service list -o json
functional_test.go:1504: Took "303.261342ms" to run "out/minikube-linux-amd64 -p functional-589798 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.215:32167
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589798 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-589798
localhost/kicbase/echo-server:functional-589798
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589798 image ls --format short --alsologtostderr:
I1213 08:46:15.257459   19420 out.go:360] Setting OutFile to fd 1 ...
I1213 08:46:15.257630   19420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.257644   19420 out.go:374] Setting ErrFile to fd 2...
I1213 08:46:15.257650   19420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.258025   19420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:46:15.258889   19420 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.259041   19420 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.261566   19420 ssh_runner.go:195] Run: systemctl --version
I1213 08:46:15.264098   19420 main.go:143] libmachine: domain functional-589798 has defined MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.264559   19420 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:83:37:cf", ip: ""} in network mk-functional-589798: {Iface:virbr1 ExpiryTime:2025-12-13 09:42:54 +0000 UTC Type:0 Mac:52:54:00:83:37:cf Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-589798 Clientid:01:52:54:00:83:37:cf}
I1213 08:46:15.264582   19420 main.go:143] libmachine: domain functional-589798 has defined IP address 192.168.39.215 and MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.264706   19420 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-589798/id_rsa Username:docker}
I1213 08:46:15.368696   19420 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589798 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-589798  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-589798  │ 69bb0f28d3deb │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589798 image ls --format table --alsologtostderr:
I1213 08:46:15.872342   19471 out.go:360] Setting OutFile to fd 1 ...
I1213 08:46:15.872446   19471 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.872458   19471 out.go:374] Setting ErrFile to fd 2...
I1213 08:46:15.872465   19471 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.872686   19471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:46:15.873211   19471 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.873315   19471 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.875588   19471 ssh_runner.go:195] Run: systemctl --version
I1213 08:46:15.878132   19471 main.go:143] libmachine: domain functional-589798 has defined MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.878598   19471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:83:37:cf", ip: ""} in network mk-functional-589798: {Iface:virbr1 ExpiryTime:2025-12-13 09:42:54 +0000 UTC Type:0 Mac:52:54:00:83:37:cf Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-589798 Clientid:01:52:54:00:83:37:cf}
I1213 08:46:15.878628   19471 main.go:143] libmachine: domain functional-589798 has defined IP address 192.168.39.215 and MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.878776   19471 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-589798/id_rsa Username:docker}
I1213 08:46:15.971432   19471 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589798 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9
187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/ku
be-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77
afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-589798"],"size":"4944818"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","dock
er.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e5139252
4dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"69bb0f28d3deb34bcc9caffcf4a718ee9d125475b90eefa0fe483b7f03e37c1e","repoDigests":["localhost/minikube-local-cache-test@sha256:695a4d9c5400b57ce72aa3af1138fb074847ffc8af70ae
1738315b30f6fb05b3"],"repoTags":["localhost/minikube-local-cache-test:functional-589798"],"size":"3328"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589798 image ls --format json --alsologtostderr:
I1213 08:46:15.601979   19440 out.go:360] Setting OutFile to fd 1 ...
I1213 08:46:15.602101   19440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.602109   19440 out.go:374] Setting ErrFile to fd 2...
I1213 08:46:15.602117   19440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.602411   19440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:46:15.603194   19440 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.603354   19440 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.605634   19440 ssh_runner.go:195] Run: systemctl --version
I1213 08:46:15.608171   19440 main.go:143] libmachine: domain functional-589798 has defined MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.608749   19440 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:83:37:cf", ip: ""} in network mk-functional-589798: {Iface:virbr1 ExpiryTime:2025-12-13 09:42:54 +0000 UTC Type:0 Mac:52:54:00:83:37:cf Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-589798 Clientid:01:52:54:00:83:37:cf}
I1213 08:46:15.608779   19440 main.go:143] libmachine: domain functional-589798 has defined IP address 192.168.39.215 and MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.608957   19440 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-589798/id_rsa Username:docker}
I1213 08:46:15.718253   19440 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)
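Note: since the JSON format is an array of image objects with repoTags and repoDigests, it is the easiest format to post-process; for example (the jq usage is an addition, not part of the test):

    out/minikube-linux-amd64 -p functional-589798 image ls --format json | jq -r '.[].repoTags[]'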

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589798 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 69bb0f28d3deb34bcc9caffcf4a718ee9d125475b90eefa0fe483b7f03e37c1e
repoDigests:
- localhost/minikube-local-cache-test@sha256:695a4d9c5400b57ce72aa3af1138fb074847ffc8af70ae1738315b30f6fb05b3
repoTags:
- localhost/minikube-local-cache-test:functional-589798
size: "3328"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-589798
size: "4944818"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589798 image ls --format yaml --alsologtostderr:
I1213 08:46:15.280033   19430 out.go:360] Setting OutFile to fd 1 ...
I1213 08:46:15.280176   19430 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.280192   19430 out.go:374] Setting ErrFile to fd 2...
I1213 08:46:15.280200   19430 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.280602   19430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:46:15.281507   19430 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.281671   19430 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.284687   19430 ssh_runner.go:195] Run: systemctl --version
I1213 08:46:15.287828   19430 main.go:143] libmachine: domain functional-589798 has defined MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.288471   19430 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:83:37:cf", ip: ""} in network mk-functional-589798: {Iface:virbr1 ExpiryTime:2025-12-13 09:42:54 +0000 UTC Type:0 Mac:52:54:00:83:37:cf Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-589798 Clientid:01:52:54:00:83:37:cf}
I1213 08:46:15.288510   19430 main.go:143] libmachine: domain functional-589798 has defined IP address 192.168.39.215 and MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.288707   19430 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-589798/id_rsa Username:docker}
I1213 08:46:15.411320   19430 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh pgrep buildkitd: exit status 1 (180.093915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image build -t localhost/my-image:functional-589798 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 image build -t localhost/my-image:functional-589798 testdata/build --alsologtostderr: (3.707878945s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589798 image build -t localhost/my-image:functional-589798 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c4f75b3e51b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-589798
--> f7f01a4b833
Successfully tagged localhost/my-image:functional-589798
f7f01a4b833ee37c5039f5ad043ada7067cdc4309eef2c0e656b144ecd99974a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589798 image build -t localhost/my-image:functional-589798 testdata/build --alsologtostderr:
I1213 08:46:15.780117   19460 out.go:360] Setting OutFile to fd 1 ...
I1213 08:46:15.780237   19460 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.780246   19460 out.go:374] Setting ErrFile to fd 2...
I1213 08:46:15.780250   19460 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:46:15.780502   19460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:46:15.781084   19460 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.781828   19460 config.go:182] Loaded profile config "functional-589798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:46:15.784094   19460 ssh_runner.go:195] Run: systemctl --version
I1213 08:46:15.786194   19460 main.go:143] libmachine: domain functional-589798 has defined MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.786614   19460 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:83:37:cf", ip: ""} in network mk-functional-589798: {Iface:virbr1 ExpiryTime:2025-12-13 09:42:54 +0000 UTC Type:0 Mac:52:54:00:83:37:cf Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-589798 Clientid:01:52:54:00:83:37:cf}
I1213 08:46:15.786638   19460 main.go:143] libmachine: domain functional-589798 has defined IP address 192.168.39.215 and MAC address 52:54:00:83:37:cf in network mk-functional-589798
I1213 08:46:15.786804   19460 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/functional-589798/id_rsa Username:docker}
I1213 08:46:15.887415   19460 build_images.go:162] Building image from path: /tmp/build.1879334502.tar
I1213 08:46:15.887477   19460 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:46:15.902469   19460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1879334502.tar
I1213 08:46:15.907925   19460 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1879334502.tar: stat -c "%s %y" /var/lib/minikube/build/build.1879334502.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1879334502.tar': No such file or directory
I1213 08:46:15.907962   19460 ssh_runner.go:362] scp /tmp/build.1879334502.tar --> /var/lib/minikube/build/build.1879334502.tar (3072 bytes)
I1213 08:46:15.946007   19460 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1879334502
I1213 08:46:15.958851   19460 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1879334502 -xf /var/lib/minikube/build/build.1879334502.tar
I1213 08:46:15.973599   19460 crio.go:315] Building image: /var/lib/minikube/build/build.1879334502
I1213 08:46:15.973657   19460 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-589798 /var/lib/minikube/build/build.1879334502 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 08:46:19.363626   19460 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-589798 /var/lib/minikube/build/build.1879334502 --cgroup-manager=cgroupfs: (3.389944568s)
I1213 08:46:19.363697   19460 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1879334502
I1213 08:46:19.389233   19460 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1879334502.tar
I1213 08:46:19.411099   19460 build_images.go:218] Built localhost/my-image:functional-589798 from /tmp/build.1879334502.tar
I1213 08:46:19.411139   19460 build_images.go:134] succeeded building to: functional-589798
I1213 08:46:19.411145   19460 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.31s)
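
The stderr trace above shows how the image build reaches the crio runtime: the local build context is tarred up, copied into the VM over SSH, unpacked under /var/lib/minikube/build, and built with "sudo podman build". Below is a minimal Go sketch of that same copy-then-build sequence for illustration only; it shells out to scp/ssh, and the host, key path, tarball name, and image tag are placeholder assumptions, not minikube's build_images.go.

// buildviassh.go - illustrative sketch of the tar -> scp -> podman build flow seen above.
// Host, key, and paths are placeholders (assumptions), not values a real cluster guarantees.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	fmt.Printf("%s", out)
}

func main() {
	const (
		host   = "docker@192.168.39.215"                 // VM address from the log (assumed reachable)
		key    = "/path/to/id_rsa"                       // placeholder SSH key path
		tarball = "/tmp/build.example.tar"               // local tarball of the build context
		remote = "/var/lib/minikube/build/build.example" // remote build directory
		image  = "localhost/my-image:example"
	)

	// 1. Copy the build-context tarball into the VM.
	run("scp", "-i", key, tarball, host+":"+remote+".tar")
	// 2. Unpack it and run podman build inside the VM, as the log does with sudo.
	run("ssh", "-i", key, host,
		fmt.Sprintf("sudo mkdir -p %s && sudo tar -C %s -xf %s.tar && sudo podman build -t %s %s",
			remote, remote, remote, image, remote))
}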

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-589798
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.86s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.215:32167
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo295563748/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo295563748/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo295563748/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T" /mount1: exit status 1 (221.446938ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:46:03.783243    9697 retry.go:31] will retry after 315.086391ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-589798 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo295563748/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo295563748/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589798 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo295563748/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.18s)
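
The first findmnt probe above exits with status 1 and retry.go schedules another attempt after roughly 315ms, since the three mount daemons may not have finished mounting yet when the probe runs. The sketch below shows a retry-with-backoff pattern in the same spirit; the attempt count, delays, jitter, and probe command are illustrative assumptions, not minikube's actual retry helper.

// retrysketch.go - illustrative retry-with-backoff loop, echoing the
// "will retry after ..." lines in the log above; all durations are assumptions.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs probe up to attempts times, sleeping a jittered, growing delay between tries.
func retry(attempts int, base time.Duration, probe func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Probe the mount point the same way the test does, via findmnt.
	err := retry(5, 200*time.Millisecond, func() error {
		return exec.Command("findmnt", "-T", "/mount1").Run()
	})
	if err != nil {
		fmt.Println("mount never became visible:", err)
	}
}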

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image load --daemon kicbase/echo-server:functional-589798 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 image load --daemon kicbase/echo-server:functional-589798 --alsologtostderr: (1.224547335s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image load --daemon kicbase/echo-server:functional-589798 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-589798
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image load --daemon kicbase/echo-server:functional-589798 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image save kicbase/echo-server:functional-589798 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image rm kicbase/echo-server:functional-589798 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (2.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-589798
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-589798 image save --daemon kicbase/echo-server:functional-589798 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-589798 image save --daemon kicbase/echo-server:functional-589798 --alsologtostderr: (2.48840313s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-589798
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (2.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-589798
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-589798
E1213 08:46:44.423362    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:46:44.429696    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-589798
E1213 08:46:44.441345    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (186.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1213 08:46:45.711177    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:46:46.993147    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:46:49.555021    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:46:54.677233    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:46:58.332721    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:04.919505    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:25.401533    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:48:06.364791    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:49:28.286205    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m5.538998734s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (186.10s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 kubectl -- rollout status deployment/busybox: (4.37856161s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-95qnt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-qjcgv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-wvjs9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-95qnt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-qjcgv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-wvjs9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-95qnt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-qjcgv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-wvjs9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.73s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-95qnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-95qnt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-qjcgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-qjcgv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-wvjs9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 kubectl -- exec busybox-7b57f96db7-wvjs9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 node add --alsologtostderr -v 5: (44.697467382s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.39s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-273858 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp testdata/cp-test.txt ha-273858:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile841445809/001/cp-test_ha-273858.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858:/home/docker/cp-test.txt ha-273858-m02:/home/docker/cp-test_ha-273858_ha-273858-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test_ha-273858_ha-273858-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858:/home/docker/cp-test.txt ha-273858-m03:/home/docker/cp-test_ha-273858_ha-273858-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test_ha-273858_ha-273858-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858:/home/docker/cp-test.txt ha-273858-m04:/home/docker/cp-test_ha-273858_ha-273858-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test_ha-273858_ha-273858-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp testdata/cp-test.txt ha-273858-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile841445809/001/cp-test_ha-273858-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m02:/home/docker/cp-test.txt ha-273858:/home/docker/cp-test_ha-273858-m02_ha-273858.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test_ha-273858-m02_ha-273858.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m02:/home/docker/cp-test.txt ha-273858-m03:/home/docker/cp-test_ha-273858-m02_ha-273858-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test_ha-273858-m02_ha-273858-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m02:/home/docker/cp-test.txt ha-273858-m04:/home/docker/cp-test_ha-273858-m02_ha-273858-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test.txt"
E1213 08:50:51.251188    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:50:51.257574    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:50:51.268946    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:50:51.290401    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:50:51.331847    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test_ha-273858-m02_ha-273858-m04.txt"
E1213 08:50:51.413901    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp testdata/cp-test.txt ha-273858-m03:/home/docker/cp-test.txt
E1213 08:50:51.575206    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile841445809/001/cp-test_ha-273858-m03.txt
E1213 08:50:51.897630    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m03:/home/docker/cp-test.txt ha-273858:/home/docker/cp-test_ha-273858-m03_ha-273858.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test.txt"
E1213 08:50:52.539538    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test_ha-273858-m03_ha-273858.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m03:/home/docker/cp-test.txt ha-273858-m02:/home/docker/cp-test_ha-273858-m03_ha-273858-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test_ha-273858-m03_ha-273858-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m03:/home/docker/cp-test.txt ha-273858-m04:/home/docker/cp-test_ha-273858-m03_ha-273858-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test.txt"
E1213 08:50:53.821523    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test_ha-273858-m03_ha-273858-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp testdata/cp-test.txt ha-273858-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile841445809/001/cp-test_ha-273858-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m04:/home/docker/cp-test.txt ha-273858:/home/docker/cp-test_ha-273858-m04_ha-273858.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858 "sudo cat /home/docker/cp-test_ha-273858-m04_ha-273858.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m04:/home/docker/cp-test.txt ha-273858-m02:/home/docker/cp-test_ha-273858-m04_ha-273858-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m02 "sudo cat /home/docker/cp-test_ha-273858-m04_ha-273858-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 cp ha-273858-m04:/home/docker/cp-test.txt ha-273858-m03:/home/docker/cp-test_ha-273858-m04_ha-273858-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m04 "sudo cat /home/docker/cp-test.txt"
E1213 08:50:56.382899    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 ssh -n ha-273858-m03 "sudo cat /home/docker/cp-test_ha-273858-m04_ha-273858-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.89s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (83.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 node stop m02 --alsologtostderr -v 5
E1213 08:51:01.504679    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:51:11.746775    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:51:32.228761    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:51:44.425482    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:51:58.333518    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:12.127978    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:13.190232    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 node stop m02 --alsologtostderr -v 5: (1m23.475858609s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5: exit status 7 (492.749857ms)

                                                
                                                
-- stdout --
	ha-273858
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-273858-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-273858-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-273858-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:52:20.173277   22619 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:52:20.173602   22619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:52:20.173611   22619 out.go:374] Setting ErrFile to fd 2...
	I1213 08:52:20.173615   22619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:52:20.173804   22619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 08:52:20.173989   22619 out.go:368] Setting JSON to false
	I1213 08:52:20.174011   22619 mustload.go:66] Loading cluster: ha-273858
	I1213 08:52:20.174278   22619 notify.go:221] Checking for updates...
	I1213 08:52:20.175077   22619 config.go:182] Loaded profile config "ha-273858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:52:20.175103   22619 status.go:174] checking status of ha-273858 ...
	I1213 08:52:20.177609   22619 status.go:371] ha-273858 host status = "Running" (err=<nil>)
	I1213 08:52:20.177626   22619 host.go:66] Checking if "ha-273858" exists ...
	I1213 08:52:20.180206   22619 main.go:143] libmachine: domain ha-273858 has defined MAC address 52:54:00:43:bd:14 in network mk-ha-273858
	I1213 08:52:20.180765   22619 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:43:bd:14", ip: ""} in network mk-ha-273858: {Iface:virbr1 ExpiryTime:2025-12-13 09:47:00 +0000 UTC Type:0 Mac:52:54:00:43:bd:14 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-273858 Clientid:01:52:54:00:43:bd:14}
	I1213 08:52:20.180793   22619 main.go:143] libmachine: domain ha-273858 has defined IP address 192.168.39.30 and MAC address 52:54:00:43:bd:14 in network mk-ha-273858
	I1213 08:52:20.180936   22619 host.go:66] Checking if "ha-273858" exists ...
	I1213 08:52:20.181119   22619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:52:20.183576   22619 main.go:143] libmachine: domain ha-273858 has defined MAC address 52:54:00:43:bd:14 in network mk-ha-273858
	I1213 08:52:20.184169   22619 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:43:bd:14", ip: ""} in network mk-ha-273858: {Iface:virbr1 ExpiryTime:2025-12-13 09:47:00 +0000 UTC Type:0 Mac:52:54:00:43:bd:14 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-273858 Clientid:01:52:54:00:43:bd:14}
	I1213 08:52:20.184206   22619 main.go:143] libmachine: domain ha-273858 has defined IP address 192.168.39.30 and MAC address 52:54:00:43:bd:14 in network mk-ha-273858
	I1213 08:52:20.184397   22619 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/ha-273858/id_rsa Username:docker}
	I1213 08:52:20.268452   22619 ssh_runner.go:195] Run: systemctl --version
	I1213 08:52:20.275820   22619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:52:20.294628   22619 kubeconfig.go:125] found "ha-273858" server: "https://192.168.39.254:8443"
	I1213 08:52:20.294663   22619 api_server.go:166] Checking apiserver status ...
	I1213 08:52:20.294697   22619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:52:20.316117   22619 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	W1213 08:52:20.327987   22619 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:52:20.328066   22619 ssh_runner.go:195] Run: ls
	I1213 08:52:20.332938   22619 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 08:52:20.337637   22619 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 08:52:20.337675   22619 status.go:463] ha-273858 apiserver status = Running (err=<nil>)
	I1213 08:52:20.337702   22619 status.go:176] ha-273858 status: &{Name:ha-273858 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:52:20.337718   22619 status.go:174] checking status of ha-273858-m02 ...
	I1213 08:52:20.339381   22619 status.go:371] ha-273858-m02 host status = "Stopped" (err=<nil>)
	I1213 08:52:20.339403   22619 status.go:384] host is not running, skipping remaining checks
	I1213 08:52:20.339410   22619 status.go:176] ha-273858-m02 status: &{Name:ha-273858-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:52:20.339427   22619 status.go:174] checking status of ha-273858-m03 ...
	I1213 08:52:20.340874   22619 status.go:371] ha-273858-m03 host status = "Running" (err=<nil>)
	I1213 08:52:20.340889   22619 host.go:66] Checking if "ha-273858-m03" exists ...
	I1213 08:52:20.343703   22619 main.go:143] libmachine: domain ha-273858-m03 has defined MAC address 52:54:00:d7:db:5e in network mk-ha-273858
	I1213 08:52:20.344185   22619 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d7:db:5e", ip: ""} in network mk-ha-273858: {Iface:virbr1 ExpiryTime:2025-12-13 09:48:45 +0000 UTC Type:0 Mac:52:54:00:d7:db:5e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-273858-m03 Clientid:01:52:54:00:d7:db:5e}
	I1213 08:52:20.344216   22619 main.go:143] libmachine: domain ha-273858-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:d7:db:5e in network mk-ha-273858
	I1213 08:52:20.344429   22619 host.go:66] Checking if "ha-273858-m03" exists ...
	I1213 08:52:20.344611   22619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:52:20.346893   22619 main.go:143] libmachine: domain ha-273858-m03 has defined MAC address 52:54:00:d7:db:5e in network mk-ha-273858
	I1213 08:52:20.347369   22619 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d7:db:5e", ip: ""} in network mk-ha-273858: {Iface:virbr1 ExpiryTime:2025-12-13 09:48:45 +0000 UTC Type:0 Mac:52:54:00:d7:db:5e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-273858-m03 Clientid:01:52:54:00:d7:db:5e}
	I1213 08:52:20.347395   22619 main.go:143] libmachine: domain ha-273858-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:d7:db:5e in network mk-ha-273858
	I1213 08:52:20.347578   22619 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/ha-273858-m03/id_rsa Username:docker}
	I1213 08:52:20.433231   22619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:52:20.451599   22619 kubeconfig.go:125] found "ha-273858" server: "https://192.168.39.254:8443"
	I1213 08:52:20.451627   22619 api_server.go:166] Checking apiserver status ...
	I1213 08:52:20.451663   22619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:52:20.471106   22619 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1784/cgroup
	W1213 08:52:20.484977   22619 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1784/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:52:20.485070   22619 ssh_runner.go:195] Run: ls
	I1213 08:52:20.490458   22619 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 08:52:20.495131   22619 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 08:52:20.495152   22619 status.go:463] ha-273858-m03 apiserver status = Running (err=<nil>)
	I1213 08:52:20.495159   22619 status.go:176] ha-273858-m03 status: &{Name:ha-273858-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:52:20.495173   22619 status.go:174] checking status of ha-273858-m04 ...
	I1213 08:52:20.496806   22619 status.go:371] ha-273858-m04 host status = "Running" (err=<nil>)
	I1213 08:52:20.496821   22619 host.go:66] Checking if "ha-273858-m04" exists ...
	I1213 08:52:20.499561   22619 main.go:143] libmachine: domain ha-273858-m04 has defined MAC address 52:54:00:7d:e6:78 in network mk-ha-273858
	I1213 08:52:20.499984   22619 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7d:e6:78", ip: ""} in network mk-ha-273858: {Iface:virbr1 ExpiryTime:2025-12-13 09:50:15 +0000 UTC Type:0 Mac:52:54:00:7d:e6:78 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-273858-m04 Clientid:01:52:54:00:7d:e6:78}
	I1213 08:52:20.500007   22619 main.go:143] libmachine: domain ha-273858-m04 has defined IP address 192.168.39.219 and MAC address 52:54:00:7d:e6:78 in network mk-ha-273858
	I1213 08:52:20.500150   22619 host.go:66] Checking if "ha-273858-m04" exists ...
	I1213 08:52:20.500357   22619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:52:20.502392   22619 main.go:143] libmachine: domain ha-273858-m04 has defined MAC address 52:54:00:7d:e6:78 in network mk-ha-273858
	I1213 08:52:20.502810   22619 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7d:e6:78", ip: ""} in network mk-ha-273858: {Iface:virbr1 ExpiryTime:2025-12-13 09:50:15 +0000 UTC Type:0 Mac:52:54:00:7d:e6:78 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-273858-m04 Clientid:01:52:54:00:7d:e6:78}
	I1213 08:52:20.502838   22619 main.go:143] libmachine: domain ha-273858-m04 has defined IP address 192.168.39.219 and MAC address 52:54:00:7d:e6:78 in network mk-ha-273858
	I1213 08:52:20.502970   22619 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/ha-273858-m04/id_rsa Username:docker}
	I1213 08:52:20.589776   22619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:52:20.609489   22619 status.go:176] ha-273858-m04 status: &{Name:ha-273858-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (83.97s)
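
The status stderr above shows how each control-plane node is judged healthy after m02 is stopped: the kubeconfig server address (https://192.168.39.254:8443) is probed at /healthz, and a 200 response with body "ok" marks the apiserver as Running. A minimal Go sketch of that probe follows; the endpoint is taken from the log, and the InsecureSkipVerify shortcut is an assumption made only to keep the sketch self-contained (a real check would trust the cluster CA).

// healthprobe.go - illustrative apiserver healthz probe, mirroring the
// "Checking apiserver healthz at ... returned 200: ok" lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only; production code should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}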

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 node start m02 --alsologtostderr -v 5: (36.571419157s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 stop --alsologtostderr -v 5
E1213 08:53:21.410945    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:53:35.112095    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:55:51.251327    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:56:18.954073    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:56:44.425986    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:56:58.335966    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 stop --alsologtostderr -v 5: (4m9.12571625s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 start --wait true --alsologtostderr -v 5: (1m50.76424104s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.03s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 node delete m03 --alsologtostderr -v 5: (17.888464944s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (250.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 stop --alsologtostderr -v 5
E1213 09:00:51.251524    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:01:44.425855    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:01:58.335982    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:03:07.491559    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 stop --alsologtostderr -v 5: (4m10.904773318s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5: exit status 7 (62.319451ms)

                                                
                                                
-- stdout --
	ha-273858
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-273858-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-273858-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:03:29.348272   25793 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:03:29.348393   25793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:03:29.348401   25793 out.go:374] Setting ErrFile to fd 2...
	I1213 09:03:29.348406   25793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:03:29.348610   25793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:03:29.348772   25793 out.go:368] Setting JSON to false
	I1213 09:03:29.348795   25793 mustload.go:66] Loading cluster: ha-273858
	I1213 09:03:29.348911   25793 notify.go:221] Checking for updates...
	I1213 09:03:29.349094   25793 config.go:182] Loaded profile config "ha-273858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:03:29.349106   25793 status.go:174] checking status of ha-273858 ...
	I1213 09:03:29.351203   25793 status.go:371] ha-273858 host status = "Stopped" (err=<nil>)
	I1213 09:03:29.351224   25793 status.go:384] host is not running, skipping remaining checks
	I1213 09:03:29.351232   25793 status.go:176] ha-273858 status: &{Name:ha-273858 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:03:29.351253   25793 status.go:174] checking status of ha-273858-m02 ...
	I1213 09:03:29.352641   25793 status.go:371] ha-273858-m02 host status = "Stopped" (err=<nil>)
	I1213 09:03:29.352655   25793 status.go:384] host is not running, skipping remaining checks
	I1213 09:03:29.352659   25793 status.go:176] ha-273858-m02 status: &{Name:ha-273858-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:03:29.352683   25793 status.go:174] checking status of ha-273858-m04 ...
	I1213 09:03:29.353818   25793 status.go:371] ha-273858-m04 host status = "Stopped" (err=<nil>)
	I1213 09:03:29.353829   25793 status.go:384] host is not running, skipping remaining checks
	I1213 09:03:29.353833   25793 status.go:176] ha-273858-m04 status: &{Name:ha-273858-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (250.97s)
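
Note: with all hosts stopped, `minikube status` exits non-zero (status 7 here) and prints one plain-text block per node with host / kubelet / apiserver / kubeconfig fields, as captured above. As an illustration only (the type and parsing below are this sketch's own, not a minikube API), a small Go program that folds that text into per-node maps could look like:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// nodeStatus is a stand-in for one block of `minikube status` output;
// it is not a type that minikube itself exposes.
type nodeStatus struct {
	Name   string
	Fields map[string]string // e.g. "host" -> "Stopped", "kubelet" -> "Stopped"
}

func parseStatus(out string) []nodeStatus {
	var nodes []nodeStatus
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "":
			continue
		case strings.Contains(line, ":"):
			if len(nodes) == 0 {
				continue // field line before any node header; ignore
			}
			kv := strings.SplitN(line, ":", 2)
			nodes[len(nodes)-1].Fields[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
		default: // a bare profile/node name such as "ha-273858-m02" starts a new block
			nodes = append(nodes, nodeStatus{Name: line, Fields: map[string]string{}})
		}
	}
	return nodes
}

func main() {
	// Abbreviated copy of the stdout captured above.
	out := "ha-273858\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n\n" +
		"ha-273858-m04\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n"
	for _, n := range parseStatus(out) {
		fmt.Printf("%s: host=%s kubelet=%s\n", n.Name, n.Fields["host"], n.Fields["kubelet"])
	}
}

For machine consumption the suite itself also uses `status --output json` (see the CopyFile section below), which avoids text parsing entirely.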

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (87.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m26.74757011s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (73.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 node add --control-plane --alsologtostderr -v 5
E1213 09:05:51.252258    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-273858 node add --control-plane --alsologtostderr -v 5: (1m13.011315861s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-273858 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                    
x
+
TestJSONOutput/start/Command (74.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-099583 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1213 09:06:44.425895    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:06:58.335016    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:07:14.317586    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-099583 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.890970787s)
--- PASS: TestJSONOutput/start/Command (74.89s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-099583 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-099583 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.82s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-099583 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-099583 --output=json --user=testUser: (6.824614247s)
--- PASS: TestJSONOutput/stop/Command (6.82s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-177112 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-177112 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.686447ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"52885629-9d18-4620-a53a-b3c3d7220e3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-177112] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6f9f799-1d32-4713-9536-e9f5a305c4a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"abea4fe3-678c-45f7-bc99-301bc982e2e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2f97c5cb-34c0-4145-a12c-34bb95ff644d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig"}}
	{"specversion":"1.0","id":"2b7e655a-2ed9-42f1-aa49-8273095e8914","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube"}}
	{"specversion":"1.0","id":"ad9713ab-2352-4ba6-a604-7bae084b6566","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1b03107f-dd9d-4db7-8e5f-f668261a3178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f34fbbf8-4ec7-4821-a2dd-d55cbee5b842","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-177112" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-177112
--- PASS: TestErrorJSONOutput (0.23s)
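
Note: with --output=json, minikube writes one structured (CloudEvents-style) JSON object per line on stdout, as the captured stdout above shows. For reference, a minimal Go sketch that decodes such a stream and surfaces the error event; the event struct is this sketch's own, shaped after the fields visible above, not a type exported by minikube:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Two lines copied from the captured stdout (IDs shortened).
	out := `{"specversion":"1.0","id":"a6f9f799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
{"specversion":"1.0","id":"f34fbbf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	dec := json.NewDecoder(strings.NewReader(out))
	for dec.More() {
		var ev event
		if err := dec.Decode(&ev); err != nil {
			log.Fatal(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}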

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (77.47s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-784071 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-784071 --driver=kvm2  --container-runtime=crio: (37.589812482s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-786302 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-786302 --driver=kvm2  --container-runtime=crio: (37.229918857s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-784071
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-786302
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-786302" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-786302
helpers_test.go:176: Cleaning up "first-784071" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-784071
--- PASS: TestMinikubeProfile (77.47s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (19.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-121151 --memory=3072 --mount-string /tmp/TestMountStartserial487906645/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-121151 --memory=3072 --mount-string /tmp/TestMountStartserial487906645/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.280630086s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-121151 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-121151 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
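
Note: the mount is verified from inside the guest with `findmnt --json /minikube-host`. Assuming the usual util-linux JSON shape (a top-level "filesystems" array with target/source/fstype/options), a hedged Go sketch that checks such output could be (the source and fstype values below are illustrative only, not taken from this run):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Minimal assumed shape of `findmnt --json <target>` output (util-linux);
// only the fields checked here are declared.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Example payload of the kind the ssh'd findmnt call would return.
	raw := []byte(`{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`)

	var out findmntOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		log.Fatal(err)
	}
	if len(out.Filesystems) == 0 || out.Filesystems[0].Target != "/minikube-host" {
		log.Fatal("/minikube-host is not mounted")
	}
	fmt.Printf("mounted via %s (%s)\n", out.Filesystems[0].FSType, out.Filesystems[0].Source)
}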

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (21.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-134616 --memory=3072 --mount-string /tmp/TestMountStartserial487906645/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-134616 --memory=3072 --mount-string /tmp/TestMountStartserial487906645/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.944369774s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.94s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-134616 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-134616 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-121151 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-134616 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-134616 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-134616
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-134616: (1.290103794s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (18.58s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-134616
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-134616: (17.57752442s)
--- PASS: TestMountStart/serial/RestartStopped (18.58s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-134616 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-134616 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (103.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-613005 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 09:10:01.413234    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:51.252543    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-613005 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m43.439468263s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.77s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- rollout status deployment/busybox
E1213 09:11:44.423522    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-613005 -- rollout status deployment/busybox: (4.231953648s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-4gdv8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-65ws6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-4gdv8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-65ws6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-4gdv8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-65ws6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.78s)
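
Note: the deployment check above exec's nslookup for cluster-internal and external names from each busybox pod. A rough standalone equivalent, sketched in Go with plain kubectl against the profile's context (the test itself goes through the minikube kubectl wrapper and discovers pod names dynamically; the names below are copied from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-4gdv8", "busybox-7b57f96db7-65ws6"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			// kubectl --context multinode-613005 exec <pod> -- nslookup <name>
			out, err := exec.Command("kubectl", "--context", "multinode-613005",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("nslookup %s from %s failed: %v\n%s", name, pod, err, out)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}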

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-4gdv8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-4gdv8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-65ws6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-613005 -- exec busybox-7b57f96db7-65ws6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (41.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-613005 -v=5 --alsologtostderr
E1213 09:11:58.333347    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-613005 -v=5 --alsologtostderr: (41.23802949s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.68s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-613005 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (5.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp testdata/cp-test.txt multinode-613005:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile203812923/001/cp-test_multinode-613005.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005:/home/docker/cp-test.txt multinode-613005-m02:/home/docker/cp-test_multinode-613005_multinode-613005-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m02 "sudo cat /home/docker/cp-test_multinode-613005_multinode-613005-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005:/home/docker/cp-test.txt multinode-613005-m03:/home/docker/cp-test_multinode-613005_multinode-613005-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m03 "sudo cat /home/docker/cp-test_multinode-613005_multinode-613005-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp testdata/cp-test.txt multinode-613005-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile203812923/001/cp-test_multinode-613005-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005-m02:/home/docker/cp-test.txt multinode-613005:/home/docker/cp-test_multinode-613005-m02_multinode-613005.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005 "sudo cat /home/docker/cp-test_multinode-613005-m02_multinode-613005.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005-m02:/home/docker/cp-test.txt multinode-613005-m03:/home/docker/cp-test_multinode-613005-m02_multinode-613005-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m03 "sudo cat /home/docker/cp-test_multinode-613005-m02_multinode-613005-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp testdata/cp-test.txt multinode-613005-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile203812923/001/cp-test_multinode-613005-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005-m03:/home/docker/cp-test.txt multinode-613005:/home/docker/cp-test_multinode-613005-m03_multinode-613005.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005 "sudo cat /home/docker/cp-test_multinode-613005-m03_multinode-613005.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 cp multinode-613005-m03:/home/docker/cp-test.txt multinode-613005-m02:/home/docker/cp-test_multinode-613005-m03_multinode-613005-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 ssh -n multinode-613005-m02 "sudo cat /home/docker/cp-test_multinode-613005-m03_multinode-613005-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.96s)
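
Note: each round trip above is `minikube cp` followed by `minikube ssh -n <node> "sudo cat ..."` to confirm the file content arrived intact. A single such round trip, sketched as a standalone Go program (assuming a minikube binary on PATH and the same profile/node/file names as in this run):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-613005"
	const node = "multinode-613005-m02"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}

	// minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
	if out, err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the original")
	}
}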

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-613005 node stop m03: (1.546700638s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-613005 status: exit status 7 (318.768084ms)

                                                
                                                
-- stdout --
	multinode-613005
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-613005-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-613005-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr: exit status 7 (334.276233ms)

                                                
                                                
-- stdout --
	multinode-613005
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-613005-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-613005-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:12:40.370167   31361 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:12:40.370435   31361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:12:40.370445   31361 out.go:374] Setting ErrFile to fd 2...
	I1213 09:12:40.370448   31361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:12:40.370644   31361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:12:40.370804   31361 out.go:368] Setting JSON to false
	I1213 09:12:40.370826   31361 mustload.go:66] Loading cluster: multinode-613005
	I1213 09:12:40.370890   31361 notify.go:221] Checking for updates...
	I1213 09:12:40.371325   31361 config.go:182] Loaded profile config "multinode-613005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:12:40.371346   31361 status.go:174] checking status of multinode-613005 ...
	I1213 09:12:40.373777   31361 status.go:371] multinode-613005 host status = "Running" (err=<nil>)
	I1213 09:12:40.373800   31361 host.go:66] Checking if "multinode-613005" exists ...
	I1213 09:12:40.376482   31361 main.go:143] libmachine: domain multinode-613005 has defined MAC address 52:54:00:dd:38:7e in network mk-multinode-613005
	I1213 09:12:40.376982   31361 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:38:7e", ip: ""} in network mk-multinode-613005: {Iface:virbr1 ExpiryTime:2025-12-13 10:10:14 +0000 UTC Type:0 Mac:52:54:00:dd:38:7e Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-613005 Clientid:01:52:54:00:dd:38:7e}
	I1213 09:12:40.377013   31361 main.go:143] libmachine: domain multinode-613005 has defined IP address 192.168.39.87 and MAC address 52:54:00:dd:38:7e in network mk-multinode-613005
	I1213 09:12:40.377216   31361 host.go:66] Checking if "multinode-613005" exists ...
	I1213 09:12:40.377493   31361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:12:40.379596   31361 main.go:143] libmachine: domain multinode-613005 has defined MAC address 52:54:00:dd:38:7e in network mk-multinode-613005
	I1213 09:12:40.379980   31361 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:38:7e", ip: ""} in network mk-multinode-613005: {Iface:virbr1 ExpiryTime:2025-12-13 10:10:14 +0000 UTC Type:0 Mac:52:54:00:dd:38:7e Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-613005 Clientid:01:52:54:00:dd:38:7e}
	I1213 09:12:40.380002   31361 main.go:143] libmachine: domain multinode-613005 has defined IP address 192.168.39.87 and MAC address 52:54:00:dd:38:7e in network mk-multinode-613005
	I1213 09:12:40.380160   31361 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/multinode-613005/id_rsa Username:docker}
	I1213 09:12:40.464896   31361 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:40.471688   31361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:40.493122   31361 kubeconfig.go:125] found "multinode-613005" server: "https://192.168.39.87:8443"
	I1213 09:12:40.493169   31361 api_server.go:166] Checking apiserver status ...
	I1213 09:12:40.493218   31361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:12:40.517021   31361 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	W1213 09:12:40.529606   31361 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:12:40.529683   31361 ssh_runner.go:195] Run: ls
	I1213 09:12:40.535325   31361 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I1213 09:12:40.540165   31361 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I1213 09:12:40.540190   31361 status.go:463] multinode-613005 apiserver status = Running (err=<nil>)
	I1213 09:12:40.540201   31361 status.go:176] multinode-613005 status: &{Name:multinode-613005 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:12:40.540220   31361 status.go:174] checking status of multinode-613005-m02 ...
	I1213 09:12:40.542022   31361 status.go:371] multinode-613005-m02 host status = "Running" (err=<nil>)
	I1213 09:12:40.542040   31361 host.go:66] Checking if "multinode-613005-m02" exists ...
	I1213 09:12:40.544748   31361 main.go:143] libmachine: domain multinode-613005-m02 has defined MAC address 52:54:00:b8:a6:ab in network mk-multinode-613005
	I1213 09:12:40.545241   31361 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:a6:ab", ip: ""} in network mk-multinode-613005: {Iface:virbr1 ExpiryTime:2025-12-13 10:11:11 +0000 UTC Type:0 Mac:52:54:00:b8:a6:ab Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-613005-m02 Clientid:01:52:54:00:b8:a6:ab}
	I1213 09:12:40.545265   31361 main.go:143] libmachine: domain multinode-613005-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:b8:a6:ab in network mk-multinode-613005
	I1213 09:12:40.545522   31361 host.go:66] Checking if "multinode-613005-m02" exists ...
	I1213 09:12:40.545726   31361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:12:40.548224   31361 main.go:143] libmachine: domain multinode-613005-m02 has defined MAC address 52:54:00:b8:a6:ab in network mk-multinode-613005
	I1213 09:12:40.548789   31361 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:a6:ab", ip: ""} in network mk-multinode-613005: {Iface:virbr1 ExpiryTime:2025-12-13 10:11:11 +0000 UTC Type:0 Mac:52:54:00:b8:a6:ab Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-613005-m02 Clientid:01:52:54:00:b8:a6:ab}
	I1213 09:12:40.548825   31361 main.go:143] libmachine: domain multinode-613005-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:b8:a6:ab in network mk-multinode-613005
	I1213 09:12:40.549064   31361 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/multinode-613005-m02/id_rsa Username:docker}
	I1213 09:12:40.629043   31361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:40.645386   31361 status.go:176] multinode-613005-m02 status: &{Name:multinode-613005-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:12:40.645428   31361 status.go:174] checking status of multinode-613005-m03 ...
	I1213 09:12:40.647210   31361 status.go:371] multinode-613005-m03 host status = "Stopped" (err=<nil>)
	I1213 09:12:40.647233   31361 status.go:384] host is not running, skipping remaining checks
	I1213 09:12:40.647240   31361 status.go:176] multinode-613005-m03 status: &{Name:multinode-613005-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-613005 node start m03 -v=5 --alsologtostderr: (40.40505285s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.92s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (322.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-613005
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-613005
E1213 09:15:51.252891    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-613005: (2m49.944147913s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-613005 --wait=true -v=5 --alsologtostderr
E1213 09:16:44.424125    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:16:58.334274    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-613005 --wait=true -v=5 --alsologtostderr: (2m32.830225433s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-613005
--- PASS: TestMultiNode/serial/RestartKeepsNodes (322.90s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-613005 node delete m03: (2.040520097s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.49s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (161.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 stop
E1213 09:19:47.495102    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:20:51.253496    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-613005 stop: (2m41.854628495s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-613005 status: exit status 7 (59.523414ms)

                                                
                                                
-- stdout --
	multinode-613005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-613005-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr: exit status 7 (60.381038ms)

                                                
                                                
-- stdout --
	multinode-613005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-613005-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:21:28.930075   34232 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:21:28.930170   34232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:21:28.930176   34232 out.go:374] Setting ErrFile to fd 2...
	I1213 09:21:28.930183   34232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:21:28.930403   34232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:21:28.930569   34232 out.go:368] Setting JSON to false
	I1213 09:21:28.930594   34232 mustload.go:66] Loading cluster: multinode-613005
	I1213 09:21:28.930652   34232 notify.go:221] Checking for updates...
	I1213 09:21:28.931071   34232 config.go:182] Loaded profile config "multinode-613005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:21:28.931090   34232 status.go:174] checking status of multinode-613005 ...
	I1213 09:21:28.933369   34232 status.go:371] multinode-613005 host status = "Stopped" (err=<nil>)
	I1213 09:21:28.933384   34232 status.go:384] host is not running, skipping remaining checks
	I1213 09:21:28.933388   34232 status.go:176] multinode-613005 status: &{Name:multinode-613005 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:21:28.933404   34232 status.go:174] checking status of multinode-613005-m02 ...
	I1213 09:21:28.934623   34232 status.go:371] multinode-613005-m02 host status = "Stopped" (err=<nil>)
	I1213 09:21:28.934636   34232 status.go:384] host is not running, skipping remaining checks
	I1213 09:21:28.934640   34232 status.go:176] multinode-613005-m02 status: &{Name:multinode-613005-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (161.97s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (113.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-613005 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 09:21:44.424134    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:21:58.333752    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-613005 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.674993055s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-613005 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.14s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-613005
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-613005-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-613005-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (76.1993ms)

                                                
                                                
-- stdout --
	* [multinode-613005-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-613005-m02' is duplicated with machine name 'multinode-613005-m02' in profile 'multinode-613005'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-613005-m03 --driver=kvm2  --container-runtime=crio
E1213 09:23:54.321659    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-613005-m03 --driver=kvm2  --container-runtime=crio: (36.977494653s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-613005
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-613005: exit status 80 (199.472568ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-613005 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-613005-m03 already exists in multinode-613005-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-613005-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.12s)
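In short, the conflict this test guards against is a new profile reusing a machine name that already belongs to another profile. A minimal sketch of the same checks from the CLI, using the names and flags taken from the log above:

	# reusing the machine name of an existing profile is rejected up front (exit 14, MK_USAGE)
	out/minikube-linux-amd64 start -p multinode-613005-m02 --driver=kvm2 --container-runtime=crio
	# a non-conflicting profile name starts normally
	out/minikube-linux-amd64 start -p multinode-613005-m03 --driver=kvm2 --container-runtime=crio
	# adding a node while a same-named standalone profile exists is also rejected (exit 80, GUEST_NODE_ADD)
	out/minikube-linux-amd64 node add -p multinode-613005
	out/minikube-linux-amd64 delete -p multinode-613005-m03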

                                                
                                    
TestScheduledStopUnix (107.5s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-902123 --memory=3072 --driver=kvm2  --container-runtime=crio
E1213 09:26:41.415095    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:44.425461    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:58.336308    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-902123 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.92388344s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902123 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:27:04.399538   36590 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:27:04.399858   36590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:27:04.399874   36590 out.go:374] Setting ErrFile to fd 2...
	I1213 09:27:04.399884   36590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:27:04.400219   36590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:27:04.400576   36590 out.go:368] Setting JSON to false
	I1213 09:27:04.400694   36590 mustload.go:66] Loading cluster: scheduled-stop-902123
	I1213 09:27:04.401213   36590 config.go:182] Loaded profile config "scheduled-stop-902123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:27:04.401332   36590 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/config.json ...
	I1213 09:27:04.401564   36590 mustload.go:66] Loading cluster: scheduled-stop-902123
	I1213 09:27:04.401708   36590 config.go:182] Loaded profile config "scheduled-stop-902123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-902123 -n scheduled-stop-902123
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902123 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:27:04.687185   36635 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:27:04.687459   36635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:27:04.687470   36635 out.go:374] Setting ErrFile to fd 2...
	I1213 09:27:04.687475   36635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:27:04.687693   36635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:27:04.687907   36635 out.go:368] Setting JSON to false
	I1213 09:27:04.688122   36635 daemonize_unix.go:73] killing process 36624 as it is an old scheduled stop
	I1213 09:27:04.688214   36635 mustload.go:66] Loading cluster: scheduled-stop-902123
	I1213 09:27:04.688591   36635 config.go:182] Loaded profile config "scheduled-stop-902123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:27:04.688662   36635 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/config.json ...
	I1213 09:27:04.688849   36635 mustload.go:66] Loading cluster: scheduled-stop-902123
	I1213 09:27:04.688942   36635 config.go:182] Loaded profile config "scheduled-stop-902123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 09:27:04.694708    9697 retry.go:31] will retry after 64.706µs: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.695824    9697 retry.go:31] will retry after 79.491µs: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.696985    9697 retry.go:31] will retry after 331.094µs: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.698137    9697 retry.go:31] will retry after 449.083µs: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.699284    9697 retry.go:31] will retry after 697.551µs: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.700484    9697 retry.go:31] will retry after 768.774µs: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.701643    9697 retry.go:31] will retry after 1.027369ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.702809    9697 retry.go:31] will retry after 1.857514ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.705087    9697 retry.go:31] will retry after 1.647728ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.707314    9697 retry.go:31] will retry after 3.443054ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.711555    9697 retry.go:31] will retry after 6.347602ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.718864    9697 retry.go:31] will retry after 11.452075ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.731173    9697 retry.go:31] will retry after 8.521584ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.740463    9697 retry.go:31] will retry after 16.56974ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.757802    9697 retry.go:31] will retry after 25.052253ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
I1213 09:27:04.783010    9697 retry.go:31] will retry after 24.657036ms: open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902123 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-902123 -n scheduled-stop-902123
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-902123
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902123 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:27:30.372272   36784 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:27:30.372384   36784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:27:30.372395   36784 out.go:374] Setting ErrFile to fd 2...
	I1213 09:27:30.372401   36784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:27:30.372638   36784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:27:30.372909   36784 out.go:368] Setting JSON to false
	I1213 09:27:30.372999   36784 mustload.go:66] Loading cluster: scheduled-stop-902123
	I1213 09:27:30.373349   36784 config.go:182] Loaded profile config "scheduled-stop-902123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:27:30.373432   36784 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/scheduled-stop-902123/config.json ...
	I1213 09:27:30.373639   36784 mustload.go:66] Loading cluster: scheduled-stop-902123
	I1213 09:27:30.373751   36784 config.go:182] Loaded profile config "scheduled-stop-902123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-902123
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-902123: exit status 7 (60.793029ms)

                                                
                                                
-- stdout --
	scheduled-stop-902123
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-902123 -n scheduled-stop-902123
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-902123 -n scheduled-stop-902123: exit status 7 (59.476987ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-902123" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-902123
--- PASS: TestScheduledStopUnix (107.50s)
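For reference, the scheduled-stop lifecycle the test walks through maps onto these commands (flags as shown in the log; the 15s window is simply the value the test uses):

	# arm a stop in the future; re-running replaces any previously scheduled stop
	out/minikube-linux-amd64 stop -p scheduled-stop-902123 --schedule 15s
	# cancel every pending scheduled stop
	out/minikube-linux-amd64 stop -p scheduled-stop-902123 --cancel-scheduled
	# once an armed stop has fired, status reports the host as Stopped and exits with status 7
	out/minikube-linux-amd64 status -p scheduled-stop-902123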

                                                
                                    
TestRunningBinaryUpgrade (459.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3745893390 start -p running-upgrade-593345 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3745893390 start -p running-upgrade-593345 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m18.444988065s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-593345 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-593345 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m16.468798504s)
helpers_test.go:176: Cleaning up "running-upgrade-593345" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-593345
--- PASS: TestRunningBinaryUpgrade (459.06s)

                                                
                                    
TestKubernetesUpgrade (155.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.16560243s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-358743
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-358743: (1.994764394s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-358743 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-358743 status --format={{.Host}}: exit status 7 (76.216863ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1213 09:31:58.333595    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.396573169s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-358743 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.976367ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-358743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-358743
	    minikube start -p kubernetes-upgrade-358743 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3587432 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-358743 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.956194357s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-358743" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-358743
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-358743: (1.042368585s)
--- PASS: TestKubernetesUpgrade (155.78s)
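The upgrade path exercised above, reduced to its CLI steps (versions copied from the log; this is a sketch of the flow, not the test code itself):

	# provision on the oldest supported version, then stop the cluster
	out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-358743
	# restarting on the newer version performs the in-place upgrade
	out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=crio
	# asking for the old version again is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead
	out/minikube-linux-amd64 start -p kubernetes-upgrade-358743 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio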

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (100.88918ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-194885] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
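The guard being verified: --no-kubernetes cannot be combined with an explicit --kubernetes-version. Per the error text, a globally pinned version should be cleared first; a sketch using the flags from the log:

	# rejected with exit 14 (MK_USAGE): the two flags are mutually exclusive
	out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	# clear a version pinned in the global config, as the error suggests
	out/minikube-linux-amd64 config unset kubernetes-version
	# a no-Kubernetes start then proceeds
	out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --driver=kvm2 --container-runtime=crio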

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (77.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194885 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194885 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.341918394s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-194885 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (77.60s)

                                                
                                    
TestNetworkPlugins/group/false (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-300821 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-300821 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (120.690169ms)

                                                
                                                
-- stdout --
	* [false-300821] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:28:19.453970   37865 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:28:19.454076   37865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:28:19.454082   37865 out.go:374] Setting ErrFile to fd 2...
	I1213 09:28:19.454089   37865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:28:19.454277   37865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
	I1213 09:28:19.454767   37865 out.go:368] Setting JSON to false
	I1213 09:28:19.455671   37865 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4243,"bootTime":1765613856,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:28:19.455725   37865 start.go:143] virtualization: kvm guest
	I1213 09:28:19.457979   37865 out.go:179] * [false-300821] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:28:19.459472   37865 notify.go:221] Checking for updates...
	I1213 09:28:19.459545   37865 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:28:19.460903   37865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:28:19.462396   37865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
	I1213 09:28:19.463640   37865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
	I1213 09:28:19.464780   37865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:28:19.465994   37865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:28:19.467595   37865 config.go:182] Loaded profile config "NoKubernetes-194885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:28:19.467725   37865 config.go:182] Loaded profile config "force-systemd-env-263328": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:28:19.467856   37865 config.go:182] Loaded profile config "offline-crio-193786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:28:19.467962   37865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:28:19.506925   37865 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 09:28:19.508382   37865 start.go:309] selected driver: kvm2
	I1213 09:28:19.508402   37865 start.go:927] validating driver "kvm2" against <nil>
	I1213 09:28:19.508413   37865 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:28:19.510380   37865 out.go:203] 
	W1213 09:28:19.511715   37865 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 09:28:19.513260   37865 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-300821 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-300821" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-300821

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-300821"

                                                
                                                
----------------------- debugLogs end: false-300821 [took: 3.257054155s] --------------------------------
helpers_test.go:176: Cleaning up "false-300821" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-300821
--- PASS: TestNetworkPlugins/group/false (3.54s)
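What this negative test pins down: with the crio runtime, CNI cannot be disabled, so --cni=false is rejected during validation before any VM is created (single command, copied from the log):

	# exit 14 (MK_USAGE): The "crio" container runtime requires CNI
	out/minikube-linux-amd64 start -p false-300821 --memory=3072 --cni=false --driver=kvm2 --container-runtime=crio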

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (32.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.152025074s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-194885 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-194885 status -o json: exit status 2 (228.313523ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-194885","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-194885
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.37s)

                                                
                                    
TestNoKubernetes/serial/Start (38.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194885 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (38.620123812s)
--- PASS: TestNoKubernetes/serial/Start (38.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22128-5761/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-194885 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-194885 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.520723ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.83s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-194885
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-194885: (1.347229817s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (55.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194885 --driver=kvm2  --container-runtime=crio
E1213 09:30:51.253143    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194885 --driver=kvm2  --container-runtime=crio: (55.262409213s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (55.26s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-194885 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-194885 "sudo systemctl is-active --quiet service kubelet": exit status 1 (163.193397ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E1213 09:31:44.423832    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (3.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (70.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1070614556 start -p stopped-upgrade-064616 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1070614556 start -p stopped-upgrade-064616 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (38.284899802s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1070614556 -p stopped-upgrade-064616 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1070614556 -p stopped-upgrade-064616 stop: (1.843597667s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-064616 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-064616 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.848170871s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.98s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-064616
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-064616: (1.117028308s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                    
TestISOImage/Setup (23.49s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-414174 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-414174 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.486079304s)
--- PASS: TestISOImage/Setup (23.49s)

                                                
                                    
TestPause/serial/Start (94.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-832760 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-832760 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m34.138220157s)
--- PASS: TestPause/serial/Start (94.14s)

                                                
                                    
TestISOImage/Binaries/crictl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which crictl"
E1213 09:41:44.423264    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
TestISOImage/Binaries/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

                                                
                                    
TestISOImage/Binaries/git (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.16s)

                                                
                                    
TestISOImage/Binaries/iptables (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.17s)

                                                
                                    
TestISOImage/Binaries/podman (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.16s)

                                                
                                    
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
TestISOImage/Binaries/socat (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.16s)

                                                
                                    
TestISOImage/Binaries/wget (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.16s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (102.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.522977207s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.52s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (35.27s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-832760 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-832760 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.234625906s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.27s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-300821 "pgrep -a kubelet"
I1213 09:35:05.719578    9697 config.go:182] Loaded profile config "auto-300821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-300821 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gbn8p" [1caf4fc0-c65c-4e86-9a11-86b0b4568f43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gbn8p" [1caf4fc0-c65c-4e86-9a11-86b0b4568f43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004743645s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (59.727416568s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.73s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-300821 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestPause/serial/Pause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-832760 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

                                                
                                    
TestPause/serial/VerifyStatus (0.22s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-832760 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-832760 --output=json --layout=cluster: exit status 2 (217.567594ms)

                                                
                                                
-- stdout --
	{"Name":"pause-832760","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-832760","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.22s)

                                                
                                    
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-832760 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m17.155517703s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.16s)

                                                
                                    
TestPause/serial/PauseAgain (0.86s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-832760 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
TestPause/serial/DeletePaused (0.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-832760 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.557689099s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.56s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (92.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1213 09:35:51.251328    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.600541819s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-qbkzl" [528d858c-0f2d-42c2-808b-5057e8269d12] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005913161s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-300821 "pgrep -a kubelet"
I1213 09:36:19.130396    9697 config.go:182] Loaded profile config "kindnet-300821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-300821 replace --force -f testdata/netcat-deployment.yaml
I1213 09:36:19.427115    9697 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-j9t7m" [d8662544-ae6d-41e1-8716-266900e92ae9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-j9t7m" [d8662544-ae6d-41e1-8716-266900e92ae9] Running
E1213 09:36:27.496927    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-014502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.005616701s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-300821 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.964376783s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.96s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-4kpss" [3c2d79b5-78f8-4e34-9e79-5ca652dcfa5b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-4kpss" [3c2d79b5-78f8-4e34-9e79-5ca652dcfa5b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005274105s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-300821 "pgrep -a kubelet"
I1213 09:36:55.102946    9697 config.go:182] Loaded profile config "calico-300821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-300821 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cpwqt" [a7ceb916-66ef-40e3-8bbb-1ebce4d0a812] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 09:36:58.333450    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-cpwqt" [a7ceb916-66ef-40e3-8bbb-1ebce4d0a812] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005163052s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.81s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-300821 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-300821 "pgrep -a kubelet"
I1213 09:37:08.620582    9697 config.go:182] Loaded profile config "custom-flannel-300821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-300821 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qrvlx" [7fa943fe-08b9-4dde-910b-0639eade57f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qrvlx" [7fa943fe-08b9-4dde-910b-0639eade57f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004779619s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-300821 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (72.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m12.646685918s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.65s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (65.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-300821 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m5.110275766s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (79.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-856148 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-856148 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m19.661126926s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (79.66s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-300821 "pgrep -a kubelet"
I1213 09:38:14.786357    9697 config.go:182] Loaded profile config "enable-default-cni-300821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-300821 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9rb5d" [5eed6616-a3bf-4997-a835-6e6842b6b7b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9rb5d" [5eed6616-a3bf-4997-a835-6e6842b6b7b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004843759s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-300821 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-qzk8j" [5d6f49d1-25f5-40af-8cfd-d895c20dc1cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005090513s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-300821 "pgrep -a kubelet"
I1213 09:38:40.533475    9697 config.go:182] Loaded profile config "bridge-300821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-300821 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mxfsp" [d1fa73a8-bd21-4fec-ac41-34c43190b80f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mxfsp" [d1fa73a8-bd21-4fec-ac41-34c43190b80f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00470958s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (95.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-915984 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-915984 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m35.747513126s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (95.75s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-300821 "pgrep -a kubelet"
I1213 09:38:43.153746    9697 config.go:182] Loaded profile config "flannel-300821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-300821 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ksr8m" [caf135c1-8020-4710-8764-44a33f5fec3a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ksr8m" [caf135c1-8020-4710-8764-44a33f5fec3a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005701882s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-300821 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-300821 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-300821 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-856148 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5c38b416-63ab-4e3b-a28c-9e0c927129c0] Pending
helpers_test.go:353: "busybox" [5c38b416-63ab-4e3b-a28c-9e0c927129c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5c38b416-63ab-4e3b-a28c-9e0c927129c0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004049562s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-856148 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-794793 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-794793 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m23.788808424s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (58.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-748644 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-748644 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (58.060980448s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-856148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-856148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.101551672s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-856148 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (76.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-856148 --alsologtostderr -v=3
E1213 09:40:05.990104    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:05.996605    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:06.008107    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:06.029671    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:06.071180    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:06.153363    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:06.315009    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:06.637133    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:07.278867    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:08.560577    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-856148 --alsologtostderr -v=3: (1m16.395145222s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (76.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-748644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-748644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006156556s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-748644 --alsologtostderr -v=3
E1213 09:40:11.122246    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:16.243785    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-748644 --alsologtostderr -v=3: (7.092853716s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-748644 -n newest-cni-748644
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-748644 -n newest-cni-748644: exit status 7 (61.659146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-748644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-748644 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-748644 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (32.741706532s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-748644 -n newest-cni-748644
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-915984 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dcd67709-7224-4288-b527-968414651dc0] Pending
helpers_test.go:353: "busybox" [dcd67709-7224-4288-b527-968414651dc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dcd67709-7224-4288-b527-968414651dc0] Running
E1213 09:40:26.485529    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006135462s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-915984 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)
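For readers reproducing the DeployApp flow above by hand: the test creates the busybox pod from testdata/busybox.yaml, waits for a pod matching integration-test=busybox to become Ready, then execs `ulimit -n` in it. A minimal Go sketch of that flow, shelling out to kubectl with the context name taken from this log; the test itself polls via its helpers_test.go helper, so the `kubectl wait` call and the 8m timeout here are assumptions, not the test's own code.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes its combined output, mirroring the
// (dbg) Run lines in the report above.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	ctx := "no-preload-915984" // context name taken from the log above

	// Create the busybox test pod from the same manifest the test uses.
	run("--context", ctx, "create", "-f", "testdata/busybox.yaml")

	// Wait for the pod to become Ready (the test polls with an 8m budget;
	// kubectl wait is used here as an equivalent CLI shortcut).
	run("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m")

	// Same post-deploy check as the test: the pod's open-file limit.
	run("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
}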

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-915984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-915984 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (88.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-915984 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-915984 --alsologtostderr -v=3: (1m28.890463164s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.89s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-856148 -n old-k8s-version-856148
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-856148 -n old-k8s-version-856148: exit status 7 (67.918702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-856148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-856148 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-856148 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (45.647906942s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-856148 -n old-k8s-version-856148
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-794793 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ca82dc20-3849-497f-867b-0103f2be4cee] Pending
helpers_test.go:353: "busybox" [ca82dc20-3849-497f-867b-0103f2be4cee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1213 09:40:34.323572    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [ca82dc20-3849-497f-867b-0103f2be4cee] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.007484798s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-794793 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-794793 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-794793 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008395359s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-794793 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (86.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-794793 --alsologtostderr -v=3
E1213 09:40:46.967068    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-794793 --alsologtostderr -v=3: (1m26.756310113s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (86.76s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-748644 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-748644 --alsologtostderr -v=1
E1213 09:40:51.251564    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/functional-589798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-748644 -n newest-cni-748644
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-748644 -n newest-cni-748644: exit status 2 (216.600218ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-748644 -n newest-cni-748644
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-748644 -n newest-cni-748644: exit status 2 (217.764753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-748644 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-748644 -n newest-cni-748644
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-748644 -n newest-cni-748644
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.78s)
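The Pause subtest above exercises a simple state machine: `minikube pause` should leave the API server reporting Paused and the kubelet Stopped (both status calls exit 2 while paused), and `minikube unpause` should bring both back. A minimal Go sketch of that sequence using os/exec, with the binary path, profile name, and flags copied from the log; this is illustrative, not the start_stop_delete_test.go helper code.

package main

import (
	"fmt"
	"os/exec"
)

// status runs `minikube status` with a Go-template field, as the test does;
// while the profile is paused these calls exit non-zero (status 2), which is
// why the report marks them "may be ok".
func status(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).CombinedOutput()
	return string(out)
}

func main() {
	profile := "newest-cni-748644" // profile name taken from the log above

	// Pause the whole profile, then confirm the expected per-component states.
	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	fmt.Println("apiserver:", status(profile, "APIServer")) // expected: Paused
	fmt.Println("kubelet:  ", status(profile, "Kubelet"))   // expected: Stopped

	// Unpause and re-check; both components should come back.
	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	fmt.Println("apiserver:", status(profile, "APIServer"))
	fmt.Println("kubelet:  ", status(profile, "Kubelet"))
}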

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (82.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-637234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1213 09:41:12.915099    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:12.921472    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:12.932944    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:12.954494    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:12.995968    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:13.077579    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:13.239544    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:13.560866    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:14.202733    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:15.485404    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-637234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m22.981566597s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.98s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hf846" [5ee0739b-886d-41ee-9913-4344c888962c] Pending
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hf846" [5ee0739b-886d-41ee-9913-4344c888962c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1213 09:41:18.046807    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:23.168833    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:27.929749    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hf846" [5ee0739b-886d-41ee-9913-4344c888962c] Running
E1213 09:41:33.410686    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004992537s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hf846" [5ee0739b-886d-41ee-9913-4344c888962c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00419255s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-856148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-856148 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-856148 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-856148 -n old-k8s-version-856148
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-856148 -n old-k8s-version-856148: exit status 2 (216.595945ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-856148 -n old-k8s-version-856148
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-856148 -n old-k8s-version-856148: exit status 2 (213.520928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-856148 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-856148 -n old-k8s-version-856148
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-856148 -n old-k8s-version-856148
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.53s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.16s)
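Each PersistentMounts subtest above runs the same guest-side check: `df -t ext4 <dir> | grep <dir>`, i.e. the directory must be backed by the persistent ext4 volume rather than tmpfs. A minimal Go sketch that loops the same check over the mount points listed in this report, using the profile name and df invocation from the log; it is illustrative, not the iso_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mount points exercised by TestISOImage/PersistentMounts in this report.
	mounts := []string{
		"/data",
		"/var/lib/docker",
		"/var/lib/cni",
		"/var/lib/kubelet",
		"/var/lib/minikube",
		"/var/lib/toolbox",
		"/var/lib/boot2docker",
	}
	for _, dir := range mounts {
		// Same guest-side check the subtests run over ssh.
		check := fmt.Sprintf("df -t ext4 %s | grep %s", dir, dir)
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "guest-414174", "ssh", check).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: not on an ext4 mount (%v)\n%s", dir, err, out)
			continue
		}
		fmt.Printf("%s: persistent (ext4)\n", dir)
	}
}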

                                                
                                    
x
+
TestISOImage/VersionJSON (0.16s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   kicbase_version: v0.0.48-1765275396-22083
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 28bc9824e3c85d2e3519912c2810d5729ab9ce8c
iso_test.go:118:   iso_version: v1.37.0-1765481609-22101
--- PASS: TestISOImage/VersionJSON (0.16s)
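The VersionJSON subtest reads /version.json from the guest and prints the four fields above. A minimal Go sketch of fetching and decoding that file; the struct's JSON key names are assumptions based on the field names printed in the log (kicbase_version, minikube_version, commit, iso_version).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionInfo mirrors the fields printed by TestISOImage/VersionJSON; the
// JSON key names are assumed from that output.
type versionInfo struct {
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
	ISOVersion      string `json:"iso_version"`
}

func main() {
	// Fetch /version.json from the guest the same way the test does.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "guest-414174", "ssh", "cat /version.json").Output()
	if err != nil {
		panic(err)
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", v)
}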

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.16s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-414174 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.16s)
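The eBPFSupport subtest's only check is the shell one-liner above: the kernel must expose BTF type information at /sys/kernel/btf/vmlinux, which the test treats as its proxy for eBPF support in the ISO's kernel. An equivalent Go sketch of the same check; it would need to run inside the guest (e.g. via `minikube ssh`) to inspect the ISO kernel rather than the host.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Presence of the kernel's exported BTF type information is the test's
	// proxy for eBPF support in the running kernel.
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK")
	} else {
		fmt.Println("NOT FOUND")
	}
}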
E1213 09:41:48.891320    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:48.897736    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:48.909141    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:48.930636    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:48.972129    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:49.053683    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:49.215282    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:49.537274    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:50.179422    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:51.461170    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:53.892325    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:54.022939    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-915984 -n no-preload-915984
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-915984 -n no-preload-915984: exit status 7 (60.650473ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-915984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (54.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-915984 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 09:41:58.332966    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:41:59.144536    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:08.862393    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:08.868819    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:08.880277    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:08.901904    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:08.943395    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:09.024863    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:09.186461    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:09.386044    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:09.508499    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-915984 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (54.26910716s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-915984 -n no-preload-915984
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.59s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793: exit status 7 (64.685941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-794793 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-794793 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1213 09:42:10.150150    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:11.432035    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:13.994212    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-794793 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (45.600072219s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-637234 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d1495e6a-89cb-454b-b14c-1917381b5e4d] Pending
helpers_test.go:353: "busybox" [d1495e6a-89cb-454b-b14c-1917381b5e4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1213 09:42:19.115871    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [d1495e6a-89cb-454b-b14c-1917381b5e4d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.005939204s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-637234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-637234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 09:42:29.357331    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:29.867520    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-637234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012093943s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-637234 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (85.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-637234 --alsologtostderr -v=3
E1213 09:42:34.854131    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:49.838600    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:49.852091    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/auto-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-637234 --alsologtostderr -v=3: (1m25.62780801s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pvpfl" [bb10f24a-1fa7-46a8-bcd0-801c3b7f1546] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004129248s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5l2rn" [de1d34da-5c52-49d6-9051-53933cafd666] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5l2rn" [de1d34da-5c52-49d6-9051-53933cafd666] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.003390863s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pvpfl" [bb10f24a-1fa7-46a8-bcd0-801c3b7f1546] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004006908s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-915984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5l2rn" [de1d34da-5c52-49d6-9051-53933cafd666] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003315589s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-794793 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-915984 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-915984 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-915984 -n no-preload-915984
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-915984 -n no-preload-915984: exit status 2 (210.007348ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-915984 -n no-preload-915984
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-915984 -n no-preload-915984: exit status 2 (213.599187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-915984 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-915984 -n no-preload-915984
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-915984 -n no-preload-915984
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.46s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-794793 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-794793 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793: exit status 2 (216.464988ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793: exit status 2 (214.249792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-794793 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-794793 -n default-k8s-diff-port-794793
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637234 -n embed-certs-637234
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637234 -n embed-certs-637234: exit status 7 (61.135912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-637234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)
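This subtest enables an addon while the cluster is stopped, so the exit status 7 / "Stopped" from the host status query is the expected precondition rather than an error. The same two commands from the log, runnable by hand:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637234 -n embed-certs-637234   # "Stopped", exit status 7
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-637234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4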

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (41.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-637234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1213 09:43:56.028992    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/enable-default-cni-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:43:56.776151    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/kindnet-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:43:57.434463    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.156986    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.163485    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.174885    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.196409    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.238027    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.300840    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/bridge-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.320345    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.482018    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:01.803778    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:02.445919    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:03.727902    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:06.290134    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:11.412025    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:17.916687    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:21.653884    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:21.782449    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/bridge-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:32.751179    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/calico-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:36.991458    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/enable-default-cni-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-637234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (41.488860247s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637234 -n embed-certs-637234
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (41.76s)
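The interleaved cert_rotation errors reference client certificates under profiles from earlier in the run (old-k8s-version-856148 and the *-300821 network-plugin profiles) that no longer exist on disk; since the restart completed in about 41s and the test passed, they appear to be benign background noise from the shared client rather than a problem with this profile. The second start itself is the plain restart command from the log:

	out/minikube-linux-amd64 start -p embed-certs-637234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.2
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637234 -n embed-certs-637234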

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fz694" [0884123e-4ff9-4b88-b085-896c40506582] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1213 09:44:42.135895    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/old-k8s-version-856148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fz694" [0884123e-4ff9-4b88-b085-896c40506582] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003950188s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)
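The wait above is driven by the test helpers, but a roughly equivalent manual check (an approximation, not what the harness itself runs) is to let kubectl block until the dashboard pod reports Ready:

	kubectl --context embed-certs-637234 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m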

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fz694" [0884123e-4ff9-4b88-b085-896c40506582] Running
E1213 09:44:52.722472    9697 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/custom-flannel-300821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003835813s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-637234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-637234 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-637234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637234 -n embed-certs-637234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637234 -n embed-certs-637234: exit status 2 (209.121643ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-637234 -n embed-certs-637234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-637234 -n embed-certs-637234: exit status 2 (208.529288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-637234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637234 -n embed-certs-637234
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-637234 -n embed-certs-637234
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.48s)

                                                
                                    

Test skip (52/437)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.29
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
152 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
154 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
157 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
359 TestNetworkPlugins/group/kubenet 3.48
368 TestNetworkPlugins/group/cilium 3.83
385 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-917695 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
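All of the TunnelCmd skips above share one cause: minikube tunnel has to modify the host routing table, and on this runner executing route would prompt for a password, so the tests bail out early. If tunnel coverage were wanted on such a host, one approach (a hypothetical sketch, not part of this job's configuration; the route binary path varies by distro) would be a sudoers drop-in granting the CI user passwordless access to route:

	# /etc/sudoers.d/minikube-tunnel  (hypothetical example)
	jenkins ALL=(ALL) NOPASSWD: /sbin/route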

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-300821 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-300821" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-300821

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-300821"

                                                
                                                
----------------------- debugLogs end: kubenet-300821 [took: 3.288327597s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-300821" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-300821
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-300821 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-300821" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-300821

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-300821" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-300821"

                                                
                                                
----------------------- debugLogs end: cilium-300821 [took: 3.640353506s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-300821" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-300821
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-606337" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-606337
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
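The driver gate recorded above at start_stop_delete_test.go:101 is the standard Go testing skip pattern. The snippet below is only a minimal sketch of such a gate, with hypothetical names and a hard-coded driver value; it is not the actual minikube test source.

package example

import "testing"

// Minimal, hypothetical sketch of a driver-gated skip. The real suite
// derives the driver from its test flags; "kvm2" is assumed here only
// because this report was produced by the KVM (kvm2) driver job.
func TestDisableDriverMounts(t *testing.T) {
	driver := "kvm2"
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// The test body would exercise the --disable-driver-mounts flow here.
}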

                                                
                                    