Test Report: KVM_Linux_crio 21969

ab0a8cfdd326918695f502976b3bdb249954a688:2025-11-23:42465

Tests failed (2/351)

Order  Failed test                   Duration (s)
37     TestAddons/parallel/Ingress   156.34
244    TestPreload                   124.62
TestAddons/parallel/Ingress (156.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-964416 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-964416 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-964416 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [16094845-c835-4494-a064-31053be1943b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [16094845-c835-4494-a064-31053be1943b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010586526s
I1123 08:13:55.084923   18055 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-964416 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.65419167s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-964416 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.198
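Exit status 28 above is curl's operation-timeout code, propagated through the ssh session, so the probe hung waiting on the ingress rather than being refused. A minimal sketch of re-running the same probe by hand against this profile (the -m 30 cap is an assumption added for convenience, not part of the test):

	# Re-run the failing ingress probe; -m 30 caps the wait so a hang
	# surfaces promptly as curl exit code 28 (operation timed out).
	out/minikube-linux-amd64 -p addons-964416 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"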
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-964416 -n addons-964416
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 logs -n 25: (1.302038733s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME
	delete  │ -p download-only-334487 │ download-only-334487 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC
	start   │ --download-only -p binary-mirror-588509 --alsologtostderr --binary-mirror http://127.0.0.1:36055 --driver=kvm2  --container-runtime=crio │ binary-mirror-588509 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │
	delete  │ -p binary-mirror-588509 │ binary-mirror-588509 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC
	addons  │ disable dashboard -p addons-964416 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │
	addons  │ enable dashboard -p addons-964416 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │
	start   │ -p addons-964416 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable volcano --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable gcp-auth --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ enable headlamp -p addons-964416 --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable metrics-server --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable headlamp --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	ip      │ addons-964416 ip │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable registry --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable yakd --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	ssh     │ addons-964416 ssh cat /opt/local-path-provisioner/pvc-cd89c1fc-4685-472d-9496-2945ce215720_default_test-pvc/file1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC
	addons  │ addons-964416 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:14 UTC
	addons  │ addons-964416 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:14 UTC
	ssh     │ addons-964416 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │
	addons  │ addons-964416 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC
	addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-964416 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC
	addons  │ addons-964416 addons disable registry-creds --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC
	addons  │ addons-964416 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC
	addons  │ addons-964416 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC
	ip      │ addons-964416 ip │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:16 UTC │ 23 Nov 25 08:16 UTC
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:10:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:10:58.036860   18653 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:10:58.036939   18653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:58.036947   18653 out.go:374] Setting ErrFile to fd 2...
	I1123 08:10:58.036951   18653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:58.037107   18653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:10:58.037615   18653 out.go:368] Setting JSON to false
	I1123 08:10:58.038378   18653 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3207,"bootTime":1763882251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:10:58.038440   18653 start.go:143] virtualization: kvm guest
	I1123 08:10:58.040243   18653 out.go:179] * [addons-964416] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:10:58.041460   18653 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:10:58.041472   18653 notify.go:221] Checking for updates...
	I1123 08:10:58.043934   18653 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:10:58.045121   18653 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 08:10:58.046299   18653 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 08:10:58.047483   18653 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:10:58.048567   18653 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:10:58.049864   18653 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:10:58.079490   18653 out.go:179] * Using the kvm2 driver based on user configuration
	I1123 08:10:58.080607   18653 start.go:309] selected driver: kvm2
	I1123 08:10:58.080617   18653 start.go:927] validating driver "kvm2" against <nil>
	I1123 08:10:58.080627   18653 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:10:58.081236   18653 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:10:58.081452   18653 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:10:58.081498   18653 cni.go:84] Creating CNI manager for ""
	I1123 08:10:58.081549   18653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 08:10:58.081559   18653 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1123 08:10:58.081610   18653 start.go:353] cluster config:
	{Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:10:58.081720   18653 iso.go:125] acquiring lock: {Name:mk4b6da1d874cbf82d9df128fb5e9a0d9b7ea794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:10:58.083019   18653 out.go:179] * Starting "addons-964416" primary control-plane node in "addons-964416" cluster
	I1123 08:10:58.084211   18653 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:10:58.084234   18653 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:10:58.084240   18653 cache.go:65] Caching tarball of preloaded images
	I1123 08:10:58.084325   18653 preload.go:238] Found /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:10:58.084334   18653 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:10:58.084637   18653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/config.json ...
	I1123 08:10:58.084658   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/config.json: {Name:mkf7d715d976f8cb8c0bc303642b8a0651fc1f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:10:58.084776   18653 start.go:360] acquireMachinesLock for addons-964416: {Name:mk2573900f00f8e3cbe200607276d61a844e85b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1123 08:10:58.084821   18653 start.go:364] duration metric: took 33.261µs to acquireMachinesLock for "addons-964416"
	I1123 08:10:58.084837   18653 start.go:93] Provisioning new machine with config: &{Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:10:58.084878   18653 start.go:125] createHost starting for "" (driver="kvm2")
	I1123 08:10:58.086737   18653 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1123 08:10:58.086876   18653 start.go:159] libmachine.API.Create for "addons-964416" (driver="kvm2")
	I1123 08:10:58.086904   18653 client.go:173] LocalClient.Create starting
	I1123 08:10:58.086971   18653 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem
	I1123 08:10:58.286256   18653 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem
	I1123 08:10:58.372382   18653 main.go:143] libmachine: creating domain...
	I1123 08:10:58.372406   18653 main.go:143] libmachine: creating network...
	I1123 08:10:58.373672   18653 main.go:143] libmachine: found existing default network
	I1123 08:10:58.373848   18653 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1123 08:10:58.374377   18653 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d9e4f0}
	I1123 08:10:58.374483   18653 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-964416</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1123 08:10:58.380291   18653 main.go:143] libmachine: creating private network mk-addons-964416 192.168.39.0/24...
	I1123 08:10:58.442042   18653 main.go:143] libmachine: private network mk-addons-964416 192.168.39.0/24 created
	I1123 08:10:58.442335   18653 main.go:143] libmachine: <network>
	  <name>mk-addons-964416</name>
	  <uuid>71fd788f-ed2f-4bfe-aa4f-90ed1672fe6a</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:ec:79:b3'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1123 08:10:58.442364   18653 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416 ...
	I1123 08:10:58.442384   18653 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21969-14048/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1123 08:10:58.442393   18653 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 08:10:58.442449   18653 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21969-14048/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21969-14048/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1123 08:10:58.693544   18653 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa...
	I1123 08:10:58.761884   18653 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/addons-964416.rawdisk...
	I1123 08:10:58.761930   18653 main.go:143] libmachine: Writing magic tar header
	I1123 08:10:58.761953   18653 main.go:143] libmachine: Writing SSH key tar header
	I1123 08:10:58.762029   18653 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416 ...
	I1123 08:10:58.762095   18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416
	I1123 08:10:58.762130   18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416 (perms=drwx------)
	I1123 08:10:58.762147   18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048/.minikube/machines
	I1123 08:10:58.762161   18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048/.minikube/machines (perms=drwxr-xr-x)
	I1123 08:10:58.762174   18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 08:10:58.762187   18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048/.minikube (perms=drwxr-xr-x)
	I1123 08:10:58.762201   18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048
	I1123 08:10:58.762211   18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048 (perms=drwxrwxr-x)
	I1123 08:10:58.762219   18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1123 08:10:58.762227   18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1123 08:10:58.762238   18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1123 08:10:58.762246   18653 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1123 08:10:58.762254   18653 main.go:143] libmachine: checking permissions on dir: /home
	I1123 08:10:58.762262   18653 main.go:143] libmachine: skipping /home - not owner
	I1123 08:10:58.762266   18653 main.go:143] libmachine: defining domain...
	I1123 08:10:58.763412   18653 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-964416</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/addons-964416.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-964416'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1123 08:10:58.770763   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:bd:1f:e4 in network default
	I1123 08:10:58.771292   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:10:58.771310   18653 main.go:143] libmachine: starting domain...
	I1123 08:10:58.771314   18653 main.go:143] libmachine: ensuring networks are active...
	I1123 08:10:58.771925   18653 main.go:143] libmachine: Ensuring network default is active
	I1123 08:10:58.772212   18653 main.go:143] libmachine: Ensuring network mk-addons-964416 is active
	I1123 08:10:58.772783   18653 main.go:143] libmachine: getting domain XML...
	I1123 08:10:58.773767   18653 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-964416</name>
	  <uuid>198921e3-3bb9-4b45-9dea-69ff479a7843</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/addons-964416.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:e8:75:8f'/>
	      <source network='mk-addons-964416'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bd:1f:e4'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1123 08:10:59.200242   18653 main.go:143] libmachine: waiting for domain to start...
	I1123 08:10:59.201336   18653 main.go:143] libmachine: domain is now running
	I1123 08:10:59.201352   18653 main.go:143] libmachine: waiting for IP...
	I1123 08:10:59.202050   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:10:59.202419   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:10:59.202432   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:10:59.202663   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:10:59.202715   18653 retry.go:31] will retry after 205.989952ms: waiting for domain to come up
	I1123 08:10:59.410172   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:10:59.410663   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:10:59.410677   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:10:59.410917   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:10:59.410965   18653 retry.go:31] will retry after 267.84973ms: waiting for domain to come up
	I1123 08:10:59.680513   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:10:59.680952   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:10:59.680966   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:10:59.681180   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:10:59.681206   18653 retry.go:31] will retry after 477.98669ms: waiting for domain to come up
	I1123 08:11:00.160923   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:00.161450   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:00.161481   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:00.161775   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:00.161808   18653 retry.go:31] will retry after 471.610526ms: waiting for domain to come up
	I1123 08:11:00.635573   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:00.636080   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:00.636095   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:00.636344   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:00.636385   18653 retry.go:31] will retry after 542.4133ms: waiting for domain to come up
	I1123 08:11:01.180105   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:01.180624   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:01.180642   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:01.180952   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:01.180989   18653 retry.go:31] will retry after 703.526723ms: waiting for domain to come up
	I1123 08:11:01.885695   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:01.886173   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:01.886186   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:01.886454   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:01.886506   18653 retry.go:31] will retry after 909.542016ms: waiting for domain to come up
	I1123 08:11:02.797278   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:02.797806   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:02.797824   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:02.798072   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:02.798105   18653 retry.go:31] will retry after 1.192874427s: waiting for domain to come up
	I1123 08:11:03.992911   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:03.993501   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:03.993520   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:03.993793   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:03.993827   18653 retry.go:31] will retry after 1.248389295s: waiting for domain to come up
	I1123 08:11:05.244214   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:05.244760   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:05.244777   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:05.245052   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:05.245084   18653 retry.go:31] will retry after 1.651266277s: waiting for domain to come up
	I1123 08:11:06.898820   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:06.899378   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:06.899390   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:06.899705   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:06.899727   18653 retry.go:31] will retry after 2.501950947s: waiting for domain to come up
	I1123 08:11:09.403560   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:09.404138   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:09.404156   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:09.404482   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:09.404524   18653 retry.go:31] will retry after 2.547751799s: waiting for domain to come up
	I1123 08:11:11.953413   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:11.953888   18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
	I1123 08:11:11.953900   18653 main.go:143] libmachine: trying to list again with source=arp
	I1123 08:11:11.954167   18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
	I1123 08:11:11.954191   18653 retry.go:31] will retry after 3.765225681s: waiting for domain to come up
	I1123 08:11:15.722527   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:15.723057   18653 main.go:143] libmachine: domain addons-964416 has current primary IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:15.723070   18653 main.go:143] libmachine: found domain IP: 192.168.39.198
	I1123 08:11:15.723076   18653 main.go:143] libmachine: reserving static IP address...
	I1123 08:11:15.723458   18653 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-964416", mac: "52:54:00:e8:75:8f", ip: "192.168.39.198"} in network mk-addons-964416
	I1123 08:11:15.897620   18653 main.go:143] libmachine: reserved static IP address 192.168.39.198 for domain addons-964416
	I1123 08:11:15.897647   18653 main.go:143] libmachine: waiting for SSH...
	I1123 08:11:15.897654   18653 main.go:143] libmachine: Getting to WaitForSSH function...
	I1123 08:11:15.900288   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:15.900789   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:15.900818   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:15.900981   18653 main.go:143] libmachine: Using SSH client type: native
	I1123 08:11:15.901180   18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1123 08:11:15.901195   18653 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1123 08:11:16.014135   18653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:11:16.014540   18653 main.go:143] libmachine: domain creation complete
	I1123 08:11:16.016018   18653 machine.go:94] provisionDockerMachine start ...
	I1123 08:11:16.018144   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.018554   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.018584   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.018747   18653 main.go:143] libmachine: Using SSH client type: native
	I1123 08:11:16.018954   18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1123 08:11:16.018968   18653 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:11:16.130854   18653 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1123 08:11:16.130878   18653 buildroot.go:166] provisioning hostname "addons-964416"
	I1123 08:11:16.133669   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.134073   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.134094   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.134246   18653 main.go:143] libmachine: Using SSH client type: native
	I1123 08:11:16.134452   18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1123 08:11:16.134478   18653 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-964416 && echo "addons-964416" | sudo tee /etc/hostname
	I1123 08:11:16.265167   18653 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-964416
	
	I1123 08:11:16.267795   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.268099   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.268118   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.268296   18653 main.go:143] libmachine: Using SSH client type: native
	I1123 08:11:16.268503   18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1123 08:11:16.268518   18653 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-964416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-964416/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-964416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:11:16.392545   18653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:11:16.392576   18653 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21969-14048/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-14048/.minikube}
	I1123 08:11:16.392612   18653 buildroot.go:174] setting up certificates
	I1123 08:11:16.392627   18653 provision.go:84] configureAuth start
	I1123 08:11:16.395130   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.395494   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.395520   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.397512   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.397787   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.397810   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.397940   18653 provision.go:143] copyHostCerts
	I1123 08:11:16.398013   18653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/ca.pem (1082 bytes)
	I1123 08:11:16.398124   18653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/cert.pem (1123 bytes)
	I1123 08:11:16.398207   18653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/key.pem (1675 bytes)
	I1123 08:11:16.398289   18653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem org=jenkins.addons-964416 san=[127.0.0.1 192.168.39.198 addons-964416 localhost minikube]
	I1123 08:11:16.483278   18653 provision.go:177] copyRemoteCerts
	I1123 08:11:16.483341   18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
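The tripled /etc/docker here appears to be expected rather than a bug: the runner derives one mkdir target from each remote cert path in the auth options above (CaCertRemotePath, ServerCertRemotePath, ServerKeyRemotePath), and all three resolve to /etc/docker, so mkdir -p simply receives the same directory three times.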
	I1123 08:11:16.485737   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.486095   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.486134   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.486267   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:16.573503   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:11:16.602745   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 08:11:16.630774   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:11:16.659644   18653 provision.go:87] duration metric: took 267.000965ms to configureAuth
	I1123 08:11:16.659677   18653 buildroot.go:189] setting minikube options for container-runtime
	I1123 08:11:16.659913   18653 config.go:182] Loaded profile config "addons-964416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:11:16.662198   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.662572   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.662602   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.662790   18653 main.go:143] libmachine: Using SSH client type: native
	I1123 08:11:16.662977   18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1123 08:11:16.662991   18653 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:11:16.916158   18653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:11:16.916190   18653 machine.go:97] duration metric: took 900.154416ms to provisionDockerMachine
	I1123 08:11:16.916204   18653 client.go:176] duration metric: took 18.829290568s to LocalClient.Create
	I1123 08:11:16.916227   18653 start.go:167] duration metric: took 18.829349595s to libmachine.API.Create "addons-964416"
	I1123 08:11:16.916238   18653 start.go:293] postStartSetup for "addons-964416" (driver="kvm2")
	I1123 08:11:16.916255   18653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:11:16.916354   18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:11:16.918849   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.919244   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:16.919265   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:16.919377   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:17.007648   18653 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:11:17.012596   18653 info.go:137] Remote host: Buildroot 2025.02
	I1123 08:11:17.012618   18653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-14048/.minikube/addons for local assets ...
	I1123 08:11:17.012668   18653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-14048/.minikube/files for local assets ...
	I1123 08:11:17.012692   18653 start.go:296] duration metric: took 96.443453ms for postStartSetup
	I1123 08:11:17.047765   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.048060   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:17.048079   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.048253   18653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/config.json ...
	I1123 08:11:17.048427   18653 start.go:128] duration metric: took 18.963529863s to createHost
	I1123 08:11:17.050836   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.051661   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:17.051693   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.051888   18653 main.go:143] libmachine: Using SSH client type: native
	I1123 08:11:17.052098   18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1123 08:11:17.052111   18653 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1123 08:11:17.165872   18653 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763885477.130700134
	
	I1123 08:11:17.165896   18653 fix.go:216] guest clock: 1763885477.130700134
	I1123 08:11:17.165903   18653 fix.go:229] Guest: 2025-11-23 08:11:17.130700134 +0000 UTC Remote: 2025-11-23 08:11:17.048438717 +0000 UTC m=+19.056022171 (delta=82.261417ms)
	I1123 08:11:17.165919   18653 fix.go:200] guest clock delta is within tolerance: 82.261417ms
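The clock check above parses the guest's `date +%s.%N` output and compares it against the host-side timestamp taken around the command; here the guest ran 82.261417ms ahead, inside tolerance, so no clock sync is forced. A worked version of that comparison in Go, using the exact values from the log (the 2s threshold is an illustrative assumption, not minikube's exact value):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest, from the log above.
		guestRaw := "1763885477.130700134"
		parts := strings.SplitN(guestRaw, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		// Host-side timestamp recorded around the command (from the log).
		remote := time.Date(2025, 11, 23, 8, 11, 17, 48438717, time.UTC)

		delta := guest.Sub(remote) // 82.261417ms, matching the log
		const tolerance = 2 * time.Second
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
	}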
	I1123 08:11:17.165924   18653 start.go:83] releasing machines lock for "addons-964416", held for 19.081095343s
	I1123 08:11:17.168830   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.169234   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:17.169256   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.169808   18653 ssh_runner.go:195] Run: cat /version.json
	I1123 08:11:17.169904   18653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:11:17.172843   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.172885   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.173244   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:17.173264   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.173311   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:17.173342   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:17.173418   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:17.173644   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:17.280421   18653 ssh_runner.go:195] Run: systemctl --version
	I1123 08:11:17.286923   18653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:11:17.443209   18653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:11:17.450509   18653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:11:17.450575   18653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:11:17.470583   18653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:11:17.470610   18653 start.go:496] detecting cgroup driver to use...
	I1123 08:11:17.470673   18653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:11:17.488970   18653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:11:17.505149   18653 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:11:17.505201   18653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:11:17.522232   18653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:11:17.538429   18653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:11:17.681162   18653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:11:17.886230   18653 docker.go:234] disabling docker service ...
	I1123 08:11:17.886312   18653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:11:17.902807   18653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:11:17.917113   18653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:11:18.073262   18653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:11:18.213337   18653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:11:18.228778   18653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:11:18.252090   18653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:11:18.252154   18653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:11:18.264270   18653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:11:18.264350   18653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:11:18.276544   18653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:11:18.288927   18653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:11:18.301013   18653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:11:18.313584   18653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:11:18.325701   18653 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:11:18.345650   18653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
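The sed series above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. The same rewrites expressed directly in Go (a sketch; the real flow shells out to sed exactly as logged):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

		// Mirror the sed edits from the log: pause image, cgroup driver,
		// conmon cgroup, then the unprivileged-port sysctl block.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(conf, `conmon_cgroup = "pod"`)
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		fmt.Print(conf)
	}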
	I1123 08:11:18.357887   18653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:11:18.367953   18653 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 08:11:18.367991   18653 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 08:11:18.390599   18653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:11:18.404998   18653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:11:18.548395   18653 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:11:18.660875   18653 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:11:18.660971   18653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:11:18.666411   18653 start.go:564] Will wait 60s for crictl version
	I1123 08:11:18.666484   18653 ssh_runner.go:195] Run: which crictl
	I1123 08:11:18.670714   18653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1123 08:11:18.709018   18653 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1123 08:11:18.709134   18653 ssh_runner.go:195] Run: crio --version
	I1123 08:11:18.738295   18653 ssh_runner.go:195] Run: crio --version
	I1123 08:11:18.769287   18653 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1123 08:11:18.772706   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:18.773150   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:18.773173   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:18.773395   18653 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1123 08:11:18.778010   18653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:11:18.793347   18653 kubeadm.go:884] updating cluster {Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:11:18.793522   18653 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:11:18.793570   18653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:11:18.823817   18653 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:11:18.823886   18653 ssh_runner.go:195] Run: which lz4
	I1123 08:11:18.828243   18653 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1123 08:11:18.832970   18653 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1123 08:11:18.833001   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1123 08:11:20.277620   18653 crio.go:462] duration metric: took 1.449416073s to copy over tarball
	I1123 08:11:20.277695   18653 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1123 08:11:21.895625   18653 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.617907972s)
	I1123 08:11:21.895650   18653 crio.go:469] duration metric: took 1.618002394s to extract the tarball
	I1123 08:11:21.895657   18653 ssh_runner.go:146] rm: /preloaded.tar.lz4
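The preload path avoids pulling images one by one: the 409 MB tarball is copied over in about 1.45s and unpacked in about 1.62s, after which all required images are already in the CRI-O store (confirmed by the crictl check below). The extraction is the plain tar invocation from the log; from Go it would look like this (assuming tar and lz4 are present on the guest, as they are in the Buildroot image):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same flags as the logged command: preserve security xattrs,
		// decompress with lz4, extract under /var.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}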
	I1123 08:11:21.936673   18653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:11:21.979082   18653 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:11:21.979107   18653 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:11:21.979116   18653 kubeadm.go:935] updating node { 192.168.39.198 8443 v1.34.1 crio true true} ...
	I1123 08:11:21.979206   18653 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-964416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:11:21.979289   18653 ssh_runner.go:195] Run: crio config
	I1123 08:11:22.025180   18653 cni.go:84] Creating CNI manager for ""
	I1123 08:11:22.025211   18653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 08:11:22.025231   18653 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:11:22.025253   18653 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-964416 NodeName:addons-964416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:11:22.025364   18653 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-964416"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.198"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
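The generated config is three kubeadm documents plus the KubeletConfiguration and KubeProxyConfiguration shown above (2216 bytes once written out, per the scp line below). A quick field-level sanity check of the kubelet document, sketched with the third-party gopkg.in/yaml.v3 package (not something minikube itself does):

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	// Just the fields worth verifying from the KubeletConfiguration document.
	type kubeletConfig struct {
		Kind          string `yaml:"kind"`
		CgroupDriver  string `yaml:"cgroupDriver"`
		ClusterDomain string `yaml:"clusterDomain"`
		FailSwapOn    bool   `yaml:"failSwapOn"`
	}

	func main() {
		doc := `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	failSwapOn: false
	`
		var kc kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%+v\n", kc)
	}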
	
	I1123 08:11:22.025449   18653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:11:22.037734   18653 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:11:22.037805   18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:11:22.049522   18653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1123 08:11:22.070033   18653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:11:22.090976   18653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1123 08:11:22.111193   18653 ssh_runner.go:195] Run: grep 192.168.39.198	control-plane.minikube.internal$ /etc/hosts
	I1123 08:11:22.115527   18653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:11:22.130414   18653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:11:22.270003   18653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:11:22.291583   18653 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416 for IP: 192.168.39.198
	I1123 08:11:22.291611   18653 certs.go:195] generating shared ca certs ...
	I1123 08:11:22.291630   18653 certs.go:227] acquiring lock for ca certs: {Name:mkaeb9dc4e066e858e41c686c8e5e48e63a99316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.291792   18653 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key
	I1123 08:11:22.347850   18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt ...
	I1123 08:11:22.347878   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt: {Name:mk20cfbbe0e260e30b971f49e8bd6543e0947bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.348038   18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key ...
	I1123 08:11:22.348050   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key: {Name:mkfe70366891274ede47b02e24442af5d9af5d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.348123   18653 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key
	I1123 08:11:22.386591   18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.crt ...
	I1123 08:11:22.386614   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.crt: {Name:mk2ee3c3942cc0dc5ef41beb046bb819150fd46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.386750   18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key ...
	I1123 08:11:22.386761   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key: {Name:mk4c03b1697eedd6395db853d5b6d9005823b710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.386827   18653 certs.go:257] generating profile certs ...
	I1123 08:11:22.386880   18653 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.key
	I1123 08:11:22.386895   18653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt with IP's: []
	I1123 08:11:22.417398   18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt ...
	I1123 08:11:22.417423   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: {Name:mk4910134fd4bedd14eed21e7416eb0cf90b1a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.417575   18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.key ...
	I1123 08:11:22.417587   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.key: {Name:mk0ec0189c24fb5bd4b3c1ce690a2cbadff79af1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.417656   18653 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c
	I1123 08:11:22.417673   18653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198]
	I1123 08:11:22.591814   18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c ...
	I1123 08:11:22.591843   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c: {Name:mkdb5363ba8b730bdb44382a62f62248c73d959d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.592001   18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c ...
	I1123 08:11:22.592015   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c: {Name:mk6d8e84fba8b4506b05c6b5a5a0a33ed018c927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.592095   18653 certs.go:382] copying /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c -> /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt
	I1123 08:11:22.592165   18653 certs.go:386] copying /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c -> /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key
	I1123 08:11:22.592214   18653 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key
	I1123 08:11:22.592232   18653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt with IP's: []
	I1123 08:11:22.774363   18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt ...
	I1123 08:11:22.774392   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt: {Name:mke095bca8a3bbdaedaf5ec07eec71ca6e778658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.775053   18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key ...
	I1123 08:11:22.775069   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key: {Name:mk5bb54bfe5bf2177917ffdfe7c8501a7453f143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:22.775260   18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:11:22.775297   18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:11:22.775324   18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:11:22.775348   18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem (1675 bytes)
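Certificate setup is a chain of self-signed CAs (minikubeCA, proxyClientCA) followed by profile certs signed against them with the SANs listed above. Generating a comparable self-signed CA in Go with the standard crypto/x509 package (a sketch, not minikube's actual crypto.go):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA, analogous to the "minikubeCA" generation above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Template doubles as parent, making the certificate self-signed.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}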
	I1123 08:11:22.775928   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:11:22.808393   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:11:22.839305   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:11:22.869871   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:11:22.899789   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 08:11:22.928927   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:11:22.958482   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:11:22.990572   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:11:23.019517   18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:11:23.054978   18653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:11:23.075796   18653 ssh_runner.go:195] Run: openssl version
	I1123 08:11:23.082308   18653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:11:23.095637   18653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:11:23.101054   18653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:11:23.101114   18653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:11:23.108731   18653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
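The b5213941.0 symlink follows the classic OpenSSL c_rehash convention: the file name is the subject-name hash of minikubeCA.pem, which lets TLS clients look the CA up by hash in /etc/ssl/certs. The equivalent of the two shell steps above from Go (assumes the openssl binary, as the logged commands do):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const cert = "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash` prints the subject-name hash used for the
		// /etc/ssl/certs/<hash>.0 symlink (b5213941 in the log above).
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // replace any stale link, like ln -fs
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
	}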
	I1123 08:11:23.121598   18653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:11:23.126928   18653 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:11:23.126978   18653 kubeadm.go:401] StartCluster: {Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:11:23.127053   18653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:11:23.127101   18653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:11:23.162543   18653 cri.go:89] found id: ""
	I1123 08:11:23.162603   18653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:11:23.174594   18653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:11:23.186086   18653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:11:23.197490   18653 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:11:23.197508   18653 kubeadm.go:158] found existing configuration files:
	
	I1123 08:11:23.197551   18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:11:23.208166   18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:11:23.208231   18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:11:23.219522   18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:11:23.230902   18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:11:23.230966   18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:11:23.242496   18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:11:23.253773   18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:11:23.253830   18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:11:23.265493   18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:11:23.276712   18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:11:23.276781   18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:11:23.288572   18653 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
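kubeadm init is run with an explicit PATH pointing at the versioned binaries and a preflight ignore list covering the directories and ports minikube manages itself. From Go that is a single exec; the sketch below abridges the ignore list for readability:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "env",
			"PATH=/var/lib/minikube/binaries/v1.34.1:"+os.Getenv("PATH"),
			"kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stream init output, as the log does
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm init: %v", err)
		}
	}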
	I1123 08:11:23.436210   18653 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:11:35.904565   18653 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:11:35.904632   18653 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:11:35.904719   18653 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:11:35.904841   18653 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:11:35.904925   18653 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:11:35.904980   18653 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:11:35.906967   18653 out.go:252]   - Generating certificates and keys ...
	I1123 08:11:35.907067   18653 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:11:35.907168   18653 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:11:35.907271   18653 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:11:35.907372   18653 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:11:35.907451   18653 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:11:35.907523   18653 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:11:35.907609   18653 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:11:35.907749   18653 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-964416 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I1123 08:11:35.907825   18653 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:11:35.907938   18653 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-964416 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I1123 08:11:35.908010   18653 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:11:35.908073   18653 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:11:35.908117   18653 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:11:35.908170   18653 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:11:35.908237   18653 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:11:35.908297   18653 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:11:35.908343   18653 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:11:35.908409   18653 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:11:35.908454   18653 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:11:35.908561   18653 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:11:35.908647   18653 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:11:35.909666   18653 out.go:252]   - Booting up control plane ...
	I1123 08:11:35.909752   18653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:11:35.909837   18653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:11:35.909926   18653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:11:35.910079   18653 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:11:35.910164   18653 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:11:35.910249   18653 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:11:35.910336   18653 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:11:35.910384   18653 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:11:35.910564   18653 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:11:35.910705   18653 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:11:35.910755   18653 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001955279s
	I1123 08:11:35.910827   18653 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:11:35.910932   18653 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.198:8443/livez
	I1123 08:11:35.911004   18653 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:11:35.911070   18653 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:11:35.911143   18653 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.751500362s
	I1123 08:11:35.911210   18653 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.401515098s
	I1123 08:11:35.911308   18653 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501593201s
	I1123 08:11:35.911401   18653 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:11:35.911546   18653 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:11:35.911625   18653 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:11:35.911850   18653 kubeadm.go:319] [mark-control-plane] Marking the node addons-964416 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:11:35.911934   18653 kubeadm.go:319] [bootstrap-token] Using token: qbvgpa.gdtv5a1xhu29o3p0
	I1123 08:11:35.913781   18653 out.go:252]   - Configuring RBAC rules ...
	I1123 08:11:35.913871   18653 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:11:35.913943   18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:11:35.914099   18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:11:35.914273   18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:11:35.914444   18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:11:35.914583   18653 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:11:35.914722   18653 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:11:35.914791   18653 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:11:35.914863   18653 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:11:35.914871   18653 kubeadm.go:319] 
	I1123 08:11:35.914964   18653 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:11:35.914978   18653 kubeadm.go:319] 
	I1123 08:11:35.915061   18653 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:11:35.915070   18653 kubeadm.go:319] 
	I1123 08:11:35.915104   18653 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:11:35.915183   18653 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:11:35.915263   18653 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:11:35.915274   18653 kubeadm.go:319] 
	I1123 08:11:35.915320   18653 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:11:35.915328   18653 kubeadm.go:319] 
	I1123 08:11:35.915394   18653 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:11:35.915403   18653 kubeadm.go:319] 
	I1123 08:11:35.915493   18653 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:11:35.915618   18653 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:11:35.915724   18653 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:11:35.915737   18653 kubeadm.go:319] 
	I1123 08:11:35.915864   18653 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:11:35.915939   18653 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:11:35.915945   18653 kubeadm.go:319] 
	I1123 08:11:35.916021   18653 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qbvgpa.gdtv5a1xhu29o3p0 \
	I1123 08:11:35.916117   18653 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b6edc1ca7c90bf9718138496669098f2f79ed1548b9ca908b39b661d6f737e61 \
	I1123 08:11:35.916142   18653 kubeadm.go:319] 	--control-plane 
	I1123 08:11:35.916146   18653 kubeadm.go:319] 
	I1123 08:11:35.916220   18653 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:11:35.916226   18653 kubeadm.go:319] 
	I1123 08:11:35.916288   18653 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qbvgpa.gdtv5a1xhu29o3p0 \
	I1123 08:11:35.916392   18653 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b6edc1ca7c90bf9718138496669098f2f79ed1548b9ca908b39b661d6f737e61 
	I1123 08:11:35.916403   18653 cni.go:84] Creating CNI manager for ""
	I1123 08:11:35.916410   18653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 08:11:35.917857   18653 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1123 08:11:35.918962   18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1123 08:11:35.932867   18653 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
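The 496-byte conflist payload itself is not echoed in the log. A representative bridge-plugin config of the kind written to /etc/cni/net.d (illustrative content only, not the exact bytes minikube ships):

	package main

	import (
		"log"
		"os"
	)

	// Illustrative bridge CNI config matching the pod CIDR above; the
	// exact 496-byte payload minikube writes is not shown in the log.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}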
	I1123 08:11:35.959128   18653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:11:35.959232   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:35.959246   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-964416 minikube.k8s.io/updated_at=2025_11_23T08_11_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=addons-964416 minikube.k8s.io/primary=true
	I1123 08:11:36.011606   18653 ops.go:34] apiserver oom_adj: -16
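An oom_adj of -16 confirms the apiserver is shielded from the OOM killer before minikube proceeds. Reading the same value from Go (assumes pgrep finds exactly one kube-apiserver process):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			log.Fatal(err)
		}
		adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}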
	I1123 08:11:36.082160   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:36.583160   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:37.083151   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:37.582209   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:38.082605   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:38.582790   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:39.082308   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:39.582221   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:40.082598   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:40.582415   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:41.082790   18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:11:41.181687   18653 kubeadm.go:1114] duration metric: took 5.222519084s to wait for elevateKubeSystemPrivileges
	I1123 08:11:41.181732   18653 kubeadm.go:403] duration metric: took 18.054758087s to StartCluster
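The burst of `kubectl get sa default` calls above is a readiness poll: the default ServiceAccount only appears once the controller-manager's serviceaccount controller is running, so minikube retries on a short interval before granting cluster-admin to kube-system:default (the elevateKubeSystemPrivileges step timed at ~5.2s above). A rough shell equivalent of that loop:

    $ until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
      done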
	I1123 08:11:41.181756   18653 settings.go:142] acquiring lock: {Name:mkab6903339ca646213aa209a9d09b91734329a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:41.181918   18653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 08:11:41.182457   18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/kubeconfig: {Name:mk15e2740703c77f3808fd0888f2d0465004dca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:41.182725   18653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:11:41.182746   18653 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:11:41.182816   18653 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
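The toEnable map is the resolved view of this profile's addon flags. The same toggles can be driven from the host with the minikube CLI, e.g.:

    $ out/minikube-linux-amd64 -p addons-964416 addons enable ingress
    $ out/minikube-linux-amd64 -p addons-964416 addons list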
	I1123 08:11:41.182933   18653 addons.go:70] Setting yakd=true in profile "addons-964416"
	I1123 08:11:41.182948   18653 addons.go:70] Setting inspektor-gadget=true in profile "addons-964416"
	I1123 08:11:41.182959   18653 addons.go:239] Setting addon yakd=true in "addons-964416"
	I1123 08:11:41.182960   18653 addons.go:239] Setting addon inspektor-gadget=true in "addons-964416"
	I1123 08:11:41.182986   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.182991   18653 addons.go:70] Setting cloud-spanner=true in profile "addons-964416"
	I1123 08:11:41.183016   18653 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-964416"
	I1123 08:11:41.183026   18653 addons.go:70] Setting volcano=true in profile "addons-964416"
	I1123 08:11:41.183029   18653 addons.go:239] Setting addon cloud-spanner=true in "addons-964416"
	I1123 08:11:41.183035   18653 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-964416"
	I1123 08:11:41.183045   18653 addons.go:70] Setting volumesnapshots=true in profile "addons-964416"
	I1123 08:11:41.183051   18653 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-964416"
	I1123 08:11:41.183057   18653 addons.go:239] Setting addon volumesnapshots=true in "addons-964416"
	I1123 08:11:41.183064   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183073   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183075   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183074   18653 addons.go:70] Setting registry=true in profile "addons-964416"
	I1123 08:11:41.183089   18653 addons.go:239] Setting addon registry=true in "addons-964416"
	I1123 08:11:41.183119   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183251   18653 addons.go:70] Setting ingress=true in profile "addons-964416"
	I1123 08:11:41.183276   18653 addons.go:239] Setting addon ingress=true in "addons-964416"
	I1123 08:11:41.183310   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183598   18653 addons.go:70] Setting gcp-auth=true in profile "addons-964416"
	I1123 08:11:41.183623   18653 mustload.go:66] Loading cluster: addons-964416
	I1123 08:11:41.183781   18653 config.go:182] Loaded profile config "addons-964416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:11:41.183824   18653 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-964416"
	I1123 08:11:41.183845   18653 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-964416"
	I1123 08:11:41.183867   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183930   18653 addons.go:70] Setting ingress-dns=true in profile "addons-964416"
	I1123 08:11:41.183949   18653 addons.go:239] Setting addon ingress-dns=true in "addons-964416"
	I1123 08:11:41.183974   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183017   18653 addons.go:70] Setting metrics-server=true in profile "addons-964416"
	I1123 08:11:41.184127   18653 addons.go:239] Setting addon metrics-server=true in "addons-964416"
	I1123 08:11:41.183037   18653 addons.go:239] Setting addon volcano=true in "addons-964416"
	I1123 08:11:41.184227   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.184242   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.183000   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.182993   18653 addons.go:70] Setting default-storageclass=true in profile "addons-964416"
	I1123 08:11:41.184844   18653 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-964416"
	I1123 08:11:41.183065   18653 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-964416"
	I1123 08:11:41.185088   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.182936   18653 config.go:182] Loaded profile config "addons-964416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:11:41.183009   18653 addons.go:70] Setting storage-provisioner=true in profile "addons-964416"
	I1123 08:11:41.183019   18653 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-964416"
	I1123 08:11:41.185258   18653 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-964416"
	I1123 08:11:41.185237   18653 addons.go:239] Setting addon storage-provisioner=true in "addons-964416"
	I1123 08:11:41.185378   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.182998   18653 addons.go:70] Setting registry-creds=true in profile "addons-964416"
	I1123 08:11:41.185497   18653 addons.go:239] Setting addon registry-creds=true in "addons-964416"
	I1123 08:11:41.185524   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.186205   18653 out.go:179] * Verifying Kubernetes components...
	I1123 08:11:41.187859   18653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:11:41.190339   18653 host.go:66] Checking if "addons-964416" exists ...
	W1123 08:11:41.191938   18653 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 08:11:41.192936   18653 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 08:11:41.192997   18653 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 08:11:41.193019   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 08:11:41.193435   18653 addons.go:239] Setting addon default-storageclass=true in "addons-964416"
	I1123 08:11:41.193763   18653 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-964416"
	I1123 08:11:41.193785   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.193793   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:41.193185   18653 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 08:11:41.193025   18653 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 08:11:41.193816   18653 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 08:11:41.193826   18653 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 08:11:41.193843   18653 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 08:11:41.193848   18653 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 08:11:41.193853   18653 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 08:11:41.194556   18653 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 08:11:41.194579   18653 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:11:41.195028   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 08:11:41.194583   18653 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 08:11:41.195097   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 08:11:41.195717   18653 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:11:41.195742   18653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:11:41.195945   18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 08:11:41.195962   18653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 08:11:41.196055   18653 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:11:41.196059   18653 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:11:41.196069   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 08:11:41.196071   18653 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 08:11:41.196074   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 08:11:41.196078   18653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 08:11:41.196097   18653 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:11:41.196107   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 08:11:41.196120   18653 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:11:41.196021   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 08:11:41.196122   18653 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:11:41.196142   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 08:11:41.196187   18653 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:11:41.196195   18653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:11:41.197380   18653 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:11:41.198006   18653 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 08:11:41.198032   18653 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:11:41.198043   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:11:41.198009   18653 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 08:11:41.198824   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 08:11:41.199704   18653 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 08:11:41.199718   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 08:11:41.200272   18653 out.go:179]   - Using image docker.io/busybox:stable
	I1123 08:11:41.200325   18653 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:11:41.201378   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 08:11:41.201447   18653 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:11:41.201458   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 08:11:41.201595   18653 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:11:41.201612   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 08:11:41.203637   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 08:11:41.204685   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.205026   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.205977   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 08:11:41.206214   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.206253   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.206290   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.207066   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.207104   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.207386   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.207787   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.208127   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.208720   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.208760   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.208788   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.208924   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 08:11:41.208963   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.209032   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.209221   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.209485   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.209702   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.210022   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.210174   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.210212   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.210309   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.210335   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.210497   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.210775   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.210779   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.210828   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.210848   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.211091   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.211120   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.211290   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.211631   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.211681   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.211703   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.211767   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.211800   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.211927   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.211976   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 08:11:41.212017   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.212090   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.212122   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.212129   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.212169   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.212405   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.212611   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.212726   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.212904   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.213135   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.213142   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.213170   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.213400   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.213445   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.213490   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.213675   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.213686   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.213701   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.213844   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:41.214513   18653 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 08:11:41.215873   18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 08:11:41.215894   18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 08:11:41.218528   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.218915   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:41.218945   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:41.219124   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
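Each "new ssh client" above is one addon installer opening its own connection to copy manifests into the guest. The logged parameters are enough to reproduce a session by hand (or simply run `minikube ssh -p addons-964416`):

    $ ssh -p 22 -i /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa \
          docker@192.168.39.198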
	I1123 08:11:42.171767   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:11:42.173863   18653 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 08:11:42.173881   18653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 08:11:42.174751   18653 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 08:11:42.174765   18653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 08:11:42.178885   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 08:11:42.203558   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:11:42.208448   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:11:42.293753   18653 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 08:11:42.293808   18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 08:11:42.296069   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:11:42.302164   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:11:42.311267   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:11:42.347681   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:11:42.362584   18653 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:11:42.362602   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 08:11:42.379957   18653 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 08:11:42.379977   18653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 08:11:42.391990   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:11:42.440101   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:11:42.527359   18653 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:11:42.527383   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 08:11:42.638012   18653 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 08:11:42.638033   18653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 08:11:42.866580   18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 08:11:42.866609   18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 08:11:42.875338   18653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.692581221s)
	I1123 08:11:42.875393   18653 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.68751022s)
	I1123 08:11:42.875454   18653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:11:42.875522   18653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
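The sed pipeline above splices a hosts block (plus a `log` directive) into the CoreDNS Corefile ahead of the `forward . /etc/resolv.conf` line, so pods can resolve host.minikube.internal to the host-side gateway. Reading the stanza straight out of the sed arguments, the injected block is:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }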
	I1123 08:11:43.103109   18653 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:11:43.103132   18653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:11:43.135364   18653 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 08:11:43.135391   18653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 08:11:43.151890   18653 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 08:11:43.151911   18653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 08:11:43.160393   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:11:43.404985   18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 08:11:43.405010   18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 08:11:43.489176   18653 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 08:11:43.489197   18653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 08:11:43.532283   18653 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:11:43.532309   18653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:11:43.564321   18653 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:11:43.564854   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 08:11:43.772847   18653 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 08:11:43.772887   18653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 08:11:43.817650   18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 08:11:43.817675   18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 08:11:43.904825   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:11:43.946754   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:11:44.140771   18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 08:11:44.140809   18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 08:11:44.180843   18653 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:11:44.180865   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 08:11:44.505533   18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 08:11:44.505556   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 08:11:44.732417   18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 08:11:44.732443   18653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 08:11:44.804597   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:11:45.227649   18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 08:11:45.227673   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 08:11:45.545435   18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 08:11:45.545455   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 08:11:46.086124   18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:11:46.086158   18653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 08:11:46.673237   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:11:47.655655   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.483848839s)
	I1123 08:11:47.655718   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.476811802s)
	I1123 08:11:47.655762   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.452186594s)
	I1123 08:11:47.655836   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.447370266s)
	I1123 08:11:48.115611   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.813420647s)
	I1123 08:11:48.115733   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.804431054s)
	I1123 08:11:48.115799   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.768082025s)
	I1123 08:11:48.116056   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.81996011s)
	I1123 08:11:48.640854   18653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 08:11:48.643519   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:48.643875   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:48.643896   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:48.644048   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:48.854435   18653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 08:11:48.924433   18653 addons.go:239] Setting addon gcp-auth=true in "addons-964416"
	I1123 08:11:48.924488   18653 host.go:66] Checking if "addons-964416" exists ...
	I1123 08:11:48.926175   18653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 08:11:48.928235   18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:48.928587   18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
	I1123 08:11:48.928608   18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
	I1123 08:11:48.928737   18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
	I1123 08:11:50.270328   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.878309434s)
	I1123 08:11:50.270357   18653 addons.go:495] Verifying addon ingress=true in "addons-964416"
	I1123 08:11:50.270478   18653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.394914797s)
	I1123 08:11:50.270508   18653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.395016637s)
	I1123 08:11:50.270510   18653 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1123 08:11:50.270573   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.110151486s)
	I1123 08:11:50.270417   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.830283949s)
	I1123 08:11:50.270605   18653 addons.go:495] Verifying addon registry=true in "addons-964416"
	I1123 08:11:50.270722   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.323930377s)
	I1123 08:11:50.270666   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.365806733s)
	I1123 08:11:50.271647   18653 addons.go:495] Verifying addon metrics-server=true in "addons-964416"
	I1123 08:11:50.271248   18653 node_ready.go:35] waiting up to 6m0s for node "addons-964416" to be "Ready" ...
	I1123 08:11:50.272038   18653 out.go:179] * Verifying ingress addon...
	I1123 08:11:50.272049   18653 out.go:179] * Verifying registry addon...
	I1123 08:11:50.272921   18653 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-964416 service yakd-dashboard -n yakd-dashboard
	
	I1123 08:11:50.274526   18653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 08:11:50.274575   18653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
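The two kapi waiters poll by label selector until the matching pods report Running. A kubectl equivalent of what they are watching:

    $ kubectl --context addons-964416 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    $ kubectl --context addons-964416 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx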
	I1123 08:11:50.327736   18653 node_ready.go:49] node "addons-964416" is "Ready"
	I1123 08:11:50.327771   18653 node_ready.go:38] duration metric: took 56.109166ms for node "addons-964416" to be "Ready" ...
	I1123 08:11:50.327786   18653 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:11:50.327846   18653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:11:50.327962   18653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:11:50.327981   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:50.327998   18653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 08:11:50.328011   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:50.807786   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:50.846655   18653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-964416" context rescaled to 1 replicas
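kubeadm brings CoreDNS up with two replicas by default; here minikube rescales the deployment down to one for the single-node cluster. The manual equivalent would be:

    $ kubectl --context addons-964416 -n kube-system scale deployment coredns --replicas=1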
	I1123 08:11:50.846665   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:51.379871   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:51.384977   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:51.745150   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.940509974s)
	W1123 08:11:51.745200   18653 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 08:11:51.745219   18653 retry.go:31] will retry after 179.005398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
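This failure is the usual CRD-establishment race, as the stderr itself hints ("ensure CRDs are installed first"): the VolumeSnapshot CRDs and the VolumeSnapshotClass that instantiates them are submitted in a single apply batch, and the new kind is not yet discoverable when the class arrives. The scheduled 179ms retry normally clears it; a manual workaround is to apply the CRD first and wait for it to be Established before applying the class:

    $ kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    $ kubectl wait --for condition=established --timeout=60s \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io
    $ kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml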
	I1123 08:11:51.745347   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.072075146s)
	I1123 08:11:51.745384   18653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.81918681s)
	I1123 08:11:51.745420   18653 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.41755789s)
	I1123 08:11:51.745442   18653 api_server.go:72] duration metric: took 10.562664233s to wait for apiserver process to appear ...
	I1123 08:11:51.745453   18653 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:11:51.745558   18653 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1123 08:11:51.745386   18653 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-964416"
	I1123 08:11:51.747292   18653 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 08:11:51.747320   18653 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 08:11:51.748624   18653 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:11:51.749054   18653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 08:11:51.749747   18653 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 08:11:51.749766   18653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 08:11:51.794042   18653 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 08:11:51.794074   18653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 08:11:51.807186   18653 api_server.go:279] https://192.168.39.198:8443/healthz returned 200:
	ok
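The healthz probe hits the apiserver endpoint directly; the same check from the host (with -k, since the cluster CA is not in the host trust store) is:

    $ curl -k https://192.168.39.198:8443/healthz
    ok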
	I1123 08:11:51.831599   18653 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:11:51.831621   18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 08:11:51.844490   18653 api_server.go:141] control plane version: v1.34.1
	I1123 08:11:51.844523   18653 api_server.go:131] duration metric: took 98.989325ms to wait for apiserver health ...
	I1123 08:11:51.844532   18653 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:11:51.858034   18653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:11:51.858052   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:51.858277   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:51.858291   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:51.871224   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:11:51.924365   18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:11:51.928546   18653 system_pods.go:59] 20 kube-system pods found
	I1123 08:11:51.928573   18653 system_pods.go:61] "amd-gpu-device-plugin-8vc9q" [8295884f-da88-49f2-9084-a9c8cfc1e4d9] Running
	I1123 08:11:51.928582   18653 system_pods.go:61] "coredns-66bc5c9577-69dqf" [34c766d9-fd50-4a3f-808a-a98aa625e61c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:11:51.928589   18653 system_pods.go:61] "coredns-66bc5c9577-gxw2m" [4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:11:51.928595   18653 system_pods.go:61] "csi-hostpath-attacher-0" [c5f1ff48-f68e-422e-83e7-eacc4a9dd794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:11:51.928600   18653 system_pods.go:61] "csi-hostpath-resizer-0" [7ad454a6-cccb-4992-90db-67818e21d079] Pending
	I1123 08:11:51.928607   18653 system_pods.go:61] "csi-hostpathplugin-vns9g" [28997eb1-283e-4d60-943c-9e31386ebc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:11:51.928611   18653 system_pods.go:61] "etcd-addons-964416" [fdb49f48-84aa-4799-b97e-21ce92b79ddc] Running
	I1123 08:11:51.928614   18653 system_pods.go:61] "kube-apiserver-addons-964416" [75d01842-c68b-4a49-847c-58fbcf148fba] Running
	I1123 08:11:51.928618   18653 system_pods.go:61] "kube-controller-manager-addons-964416" [36599b1c-1da7-4b89-b7e7-baac03480cd7] Running
	I1123 08:11:51.928623   18653 system_pods.go:61] "kube-ingress-dns-minikube" [bc33e34a-7ac6-484c-b0a7-430085041ff4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:11:51.928626   18653 system_pods.go:61] "kube-proxy-cp69g" [3b6331ff-3dfb-46c8-b853-3ac13fdd22cc] Running
	I1123 08:11:51.928629   18653 system_pods.go:61] "kube-scheduler-addons-964416" [d5865b9d-d76a-46fe-ad59-9db3f56a22ac] Running
	I1123 08:11:51.928636   18653 system_pods.go:61] "metrics-server-85b7d694d7-bbw4l" [ca8af767-0eca-442a-abca-2fdfda492b61] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:11:51.928645   18653 system_pods.go:61] "nvidia-device-plugin-daemonset-n75x9" [8710964c-97c8-402e-9549-f6b1f4591c57] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:11:51.928651   18653 system_pods.go:61] "registry-6b586f9694-tgrtb" [462f4f44-75d7-422b-bb9c-ceb8be37562e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:11:51.928655   18653 system_pods.go:61] "registry-creds-764b6fb674-nrpjq" [b186300a-b391-46c2-8eee-26bb8cada6ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:11:51.928662   18653 system_pods.go:61] "registry-proxy-sn2cr" [aeb28b9e-fe74-4f9c-99cb-c02c966c626d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:11:51.928667   18653 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d275s" [dc674dd5-6a4d-49d2-8119-79fa3fcc63ef] Pending
	I1123 08:11:51.928671   18653 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xsdh6" [6c82d59c-4d16-497b-8fa3-7184384d1ee5] Pending
	I1123 08:11:51.928675   18653 system_pods.go:61] "storage-provisioner" [fafc19a5-6c67-4faa-af77-b5dc63837928] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:11:51.928681   18653 system_pods.go:74] duration metric: took 84.143729ms to wait for pod list to return data ...
	I1123 08:11:51.928693   18653 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:11:51.975940   18653 default_sa.go:45] found service account: "default"
	I1123 08:11:51.975966   18653 default_sa.go:55] duration metric: took 47.266268ms for default service account to be created ...
	I1123 08:11:51.975979   18653 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:11:52.012876   18653 system_pods.go:86] 20 kube-system pods found
	I1123 08:11:52.012914   18653 system_pods.go:89] "amd-gpu-device-plugin-8vc9q" [8295884f-da88-49f2-9084-a9c8cfc1e4d9] Running
	I1123 08:11:52.012929   18653 system_pods.go:89] "coredns-66bc5c9577-69dqf" [34c766d9-fd50-4a3f-808a-a98aa625e61c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:11:52.012940   18653 system_pods.go:89] "coredns-66bc5c9577-gxw2m" [4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:11:52.012949   18653 system_pods.go:89] "csi-hostpath-attacher-0" [c5f1ff48-f68e-422e-83e7-eacc4a9dd794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:11:52.012959   18653 system_pods.go:89] "csi-hostpath-resizer-0" [7ad454a6-cccb-4992-90db-67818e21d079] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:11:52.012975   18653 system_pods.go:89] "csi-hostpathplugin-vns9g" [28997eb1-283e-4d60-943c-9e31386ebc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:11:52.012986   18653 system_pods.go:89] "etcd-addons-964416" [fdb49f48-84aa-4799-b97e-21ce92b79ddc] Running
	I1123 08:11:52.012993   18653 system_pods.go:89] "kube-apiserver-addons-964416" [75d01842-c68b-4a49-847c-58fbcf148fba] Running
	I1123 08:11:52.012998   18653 system_pods.go:89] "kube-controller-manager-addons-964416" [36599b1c-1da7-4b89-b7e7-baac03480cd7] Running
	I1123 08:11:52.013009   18653 system_pods.go:89] "kube-ingress-dns-minikube" [bc33e34a-7ac6-484c-b0a7-430085041ff4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:11:52.013016   18653 system_pods.go:89] "kube-proxy-cp69g" [3b6331ff-3dfb-46c8-b853-3ac13fdd22cc] Running
	I1123 08:11:52.013024   18653 system_pods.go:89] "kube-scheduler-addons-964416" [d5865b9d-d76a-46fe-ad59-9db3f56a22ac] Running
	I1123 08:11:52.013033   18653 system_pods.go:89] "metrics-server-85b7d694d7-bbw4l" [ca8af767-0eca-442a-abca-2fdfda492b61] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:11:52.013048   18653 system_pods.go:89] "nvidia-device-plugin-daemonset-n75x9" [8710964c-97c8-402e-9549-f6b1f4591c57] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:11:52.013058   18653 system_pods.go:89] "registry-6b586f9694-tgrtb" [462f4f44-75d7-422b-bb9c-ceb8be37562e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:11:52.013067   18653 system_pods.go:89] "registry-creds-764b6fb674-nrpjq" [b186300a-b391-46c2-8eee-26bb8cada6ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:11:52.013080   18653 system_pods.go:89] "registry-proxy-sn2cr" [aeb28b9e-fe74-4f9c-99cb-c02c966c626d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:11:52.013089   18653 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d275s" [dc674dd5-6a4d-49d2-8119-79fa3fcc63ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:11:52.013098   18653 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xsdh6" [6c82d59c-4d16-497b-8fa3-7184384d1ee5] Pending
	I1123 08:11:52.013108   18653 system_pods.go:89] "storage-provisioner" [fafc19a5-6c67-4faa-af77-b5dc63837928] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:11:52.013119   18653 system_pods.go:126] duration metric: took 37.132161ms to wait for k8s-apps to be running ...
	I1123 08:11:52.013135   18653 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:11:52.013190   18653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:11:52.263320   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:52.286656   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:52.287127   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:52.761198   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:52.779283   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:52.787667   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:53.286648   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:53.304290   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:53.304505   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:53.559194   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.687935904s)
	I1123 08:11:53.560118   18653 addons.go:495] Verifying addon gcp-auth=true in "addons-964416"
	I1123 08:11:53.561532   18653 out.go:179] * Verifying gcp-auth addon...
	I1123 08:11:53.563773   18653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 08:11:53.655360   18653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 08:11:53.655395   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:53.760533   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:53.806151   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:53.809686   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:54.072280   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:54.256052   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:54.281078   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:54.283869   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:54.329971   18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.405562218s)
	I1123 08:11:54.330011   18653 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.316794277s)
	I1123 08:11:54.330041   18653 system_svc.go:56] duration metric: took 2.316904612s WaitForService to wait for kubelet
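The WaitForService step that just completed is, per the Run line above, a single `sudo systemctl is-active --quiet service kubelet` executed inside the VM over SSH; with --quiet, systemctl prints nothing and reports the answer purely through its exit code. A local sketch of the same check (minikube routes it through its ssh_runner; this standalone wrapper is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // isActive reports whether a systemd unit is active. With --quiet,
    // systemctl exits 0 only while the unit is active.
    func isActive(unit string) bool {
        // Run locally here for illustration; minikube executes the
        // equivalent command in the guest via SSH (ssh_runner.go:195 above).
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", isActive("kubelet"))
    }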
	I1123 08:11:54.330058   18653 kubeadm.go:587] duration metric: took 13.147278847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:11:54.330084   18653 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:11:54.336149   18653 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1123 08:11:54.336178   18653 node_conditions.go:123] node cpu capacity is 2
	I1123 08:11:54.336195   18653 node_conditions.go:105] duration metric: took 6.103954ms to run NodePressure ...
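The NodePressure verification above reads each node's reported capacity and pressure conditions straight out of the Node status. A client-go sketch of that read, assuming a kubeconfig at the default path; the resource and condition names come from the core/v1 API, the rest of the wiring is illustrative:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            // Matches the two capacity lines logged above.
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                // A node under pressure reports one of these conditions as True.
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
                    c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition: %s\n", c.Type)
                }
            }
        }
    }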
	I1123 08:11:54.336211   18653 start.go:242] waiting for startup goroutines ...
	I1123 08:11:54.572797   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:54.753927   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:54.781211   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:54.783690   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:55.069363   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:55.253431   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:55.278385   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:55.278513   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:55.579948   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:55.756953   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:55.779362   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:55.781401   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:56.070619   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:56.254558   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:56.280320   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:56.280661   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:56.567107   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:56.753024   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:56.779759   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:56.779974   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:57.070759   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:57.254455   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:57.281990   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:57.283434   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:57.568587   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:57.910111   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:57.910562   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:57.910694   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:58.069379   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:58.256492   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:58.358458   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:58.358573   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:58.567560   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:58.753765   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:58.777725   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:58.778898   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:59.067376   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:59.252786   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:59.277254   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:11:59.278642   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:59.567603   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:11:59.755559   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:11:59.780249   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:11:59.780895   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:00.068075   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:00.254225   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:00.281022   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:00.281189   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:00.570229   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:00.755070   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:00.781948   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:00.781987   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:01.069266   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:01.253146   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:01.284937   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:01.285198   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:01.568504   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:01.753174   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:01.785159   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:01.787095   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:02.067558   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:02.256646   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:02.280522   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:02.280910   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:02.570581   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:02.755539   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:02.780832   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:02.781158   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:03.068220   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:03.253105   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:03.285159   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:03.286043   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:03.567175   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:03.752846   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:03.783867   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:03.807095   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:04.068423   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:04.257031   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:04.278779   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:04.282840   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:04.567975   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:04.754897   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:04.782147   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:04.782693   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:05.069233   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:05.260665   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:05.279905   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:05.289433   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:05.570285   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:05.756646   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:05.781720   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:05.781992   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:06.084155   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:06.264449   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:06.293177   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:06.303247   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:06.570873   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:06.754892   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:06.780508   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:06.780683   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:07.424197   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:07.424844   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:07.425052   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:07.425167   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:07.574750   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:07.754441   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:07.778739   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:07.780542   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:08.067997   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:08.253123   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:08.279911   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:08.292102   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:08.571763   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:08.827958   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:08.828045   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:08.828281   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:09.068152   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:09.254586   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:09.281784   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:09.283025   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:09.569850   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:09.755672   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:09.781176   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:09.783940   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:10.069431   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:10.254510   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:10.278272   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:10.280765   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:10.569448   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:10.752783   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:10.781020   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:10.783039   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:11.069697   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:11.257120   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:11.282908   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:11.283554   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:11.567554   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:11.755889   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:11.781984   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:11.782143   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:12.068831   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:12.255349   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:12.280781   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:12.282573   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:12.570175   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:12.754125   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:12.780882   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:12.782813   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:13.357240   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:13.357376   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:13.359202   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:13.360870   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:13.567476   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:13.752960   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:13.781866   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:13.781925   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:14.070159   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:14.253422   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:14.278828   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:14.282480   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:14.570669   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:14.756654   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:14.781712   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:14.782724   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:15.071770   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:15.255955   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:15.280715   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:15.280768   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:15.575613   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:15.753256   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:15.780348   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:15.782534   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:16.195673   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:16.254363   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:16.283233   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:16.286979   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:16.568822   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:16.755006   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:16.782293   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:16.782344   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:17.070193   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:17.269826   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:17.279534   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:17.280056   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:17.660266   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:17.757884   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:17.778228   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:17.779089   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:18.072628   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:18.260409   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:18.360004   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:18.360128   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:18.568163   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:18.753421   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:18.778449   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:18.778841   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:19.067800   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:19.253358   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:19.279456   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:19.280667   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:19.566737   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:19.755176   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:19.778006   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:19.779789   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:20.067831   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:20.252974   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:20.278361   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:20.278763   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:20.569428   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:20.755419   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:20.779710   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:20.781806   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:21.069790   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:21.253862   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:21.277985   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:21.280045   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:21.567035   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:21.754960   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:21.777925   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:21.785371   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:22.069128   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:22.254717   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:22.278782   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:22.280675   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:22.570213   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:22.754963   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:22.778695   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:22.778795   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:23.069658   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:23.254508   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:23.283579   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:23.283799   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:23.569152   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:23.756436   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:23.781305   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:23.781435   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:24.070980   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:24.254951   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:24.279736   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:24.281698   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:24.568832   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:24.754164   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:24.779566   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:24.780258   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:25.068423   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:25.254588   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:25.284296   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:25.290666   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:25.700439   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:25.805679   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:25.806219   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:25.807912   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:26.068262   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:26.253383   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:26.277977   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:26.279878   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:26.567594   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:26.753796   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:26.781092   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:26.781092   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:27.070108   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:27.253975   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:27.278073   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:27.279243   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:27.570237   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:27.753821   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:27.783365   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:27.787026   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:28.067578   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:28.255572   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:28.279824   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:28.280149   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:28.567677   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:28.754310   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:28.779259   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:28.780958   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:29.066863   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:29.254048   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:29.285120   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:29.285833   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:29.568364   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:29.755484   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:29.779376   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:29.779538   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:12:30.071015   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:30.253664   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:30.277766   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:30.279795   18653 kapi.go:107] duration metric: took 40.005270475s to wait for kubernetes.io/minikube-addons=registry ...
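Each kapi.go:96 line in this stretch is one iteration of a poll that lists pods by label selector and re-checks their phase; the kapi.go:107 line above marks the selector finally going Running (about 40s for the registry pods). A sketch of that loop under the same kubeconfig assumption as the previous snippet; minikube's kapi helpers differ in detail:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls pods matching selector in ns until every matched
    // pod reports Running, or ctx expires. The 500ms interval is an assumption.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := 0
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        running++
                    }
                }
                if running == len(pods.Items) {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pods %q in %q never all Running: %w", selector, ns, ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        // Selector taken from the log lines above.
        fmt.Println(waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"))
    }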
	I1123 08:12:30.576113   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:30.756913   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:30.854768   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:31.067295   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:31.255589   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:31.278369   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:31.568752   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:31.758009   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:31.778264   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:32.109874   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:32.256360   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:32.280770   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:32.568756   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:32.758508   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:32.785131   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:33.067779   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:33.266493   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:33.282380   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:33.572134   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:33.754594   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:33.780988   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:34.067361   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:34.258381   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:34.278973   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:34.572513   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:34.752678   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:34.779107   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:35.070501   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:35.253227   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:35.279513   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:35.567749   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:35.757209   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:35.779942   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:36.067082   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:36.254246   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:36.288291   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:36.570290   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:36.753238   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:36.778775   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:37.066612   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:37.259263   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:37.280935   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:37.571914   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:37.759960   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:37.787555   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:38.068349   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:38.254225   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:38.281249   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:38.570949   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:38.753755   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:38.778834   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:39.076515   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:39.268013   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:39.285310   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:39.571174   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:39.753712   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:39.778850   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:40.070905   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:40.255364   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:40.355885   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:40.571119   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:40.756323   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:40.781918   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:41.070335   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:41.258367   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:41.281058   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:41.568093   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:41.753649   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:41.779312   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:42.068821   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:42.259397   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:42.280168   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:42.569078   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:42.752670   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:42.778879   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:43.067973   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:43.253615   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:43.279829   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:43.568910   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:43.753770   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:43.781796   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:44.070076   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:44.254942   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:44.284232   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:44.569905   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:44.754871   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:44.777797   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:45.067161   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:45.253116   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:12:45.278662   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:45.567912   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:45.753832   18653 kapi.go:107] duration metric: took 54.00477427s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 08:12:45.777813   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:46.068057   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:46.278906   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:46.566894   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:46.779172   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:47.067483   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:47.277741   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:47.569623   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:47.777867   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:48.067928   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:48.278481   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:48.567924   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:48.778781   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:49.067114   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:49.278638   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:49.568038   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:49.778503   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:50.068100   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:50.279750   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:50.567370   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:50.778867   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:51.069541   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:51.278576   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:51.568394   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:51.778581   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:52.068285   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:52.279251   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:52.568904   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:52.778873   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:53.067799   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:53.278362   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:53.567588   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:53.778347   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:54.068107   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:54.279910   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:54.567049   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:54.781487   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:55.070098   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:55.282623   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:55.570851   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:55.781131   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:56.067767   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:56.280511   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:56.568891   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:56.778092   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:57.068642   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:57.277565   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:57.570879   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:57.778630   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:58.069634   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:58.280608   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:58.570710   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:58.778657   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:59.075718   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:59.278372   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:12:59.570415   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:12:59.782091   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:13:00.068015   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:00.278425   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:13:00.570968   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:00.783037   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:13:01.202882   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:01.279683   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:13:01.568656   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:01.778862   18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:13:02.067377   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:02.278970   18653 kapi.go:107] duration metric: took 1m12.004393125s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 08:13:02.569671   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:03.071965   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:03.568091   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:04.069148   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:04.571399   18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:13:05.067408   18653 kapi.go:107] duration metric: took 1m11.50363415s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 08:13:05.068961   18653 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-964416 cluster.
	I1123 08:13:05.070210   18653 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 08:13:05.071483   18653 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 08:13:05.072830   18653 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, storage-provisioner-rancher, registry-creds, ingress-dns, storage-provisioner, default-storageclass, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1123 08:13:05.073991   18653 addons.go:530] duration metric: took 1m23.891178936s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin inspektor-gadget storage-provisioner-rancher registry-creds ingress-dns storage-provisioner default-storageclass nvidia-device-plugin metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1123 08:13:05.074043   18653 start.go:247] waiting for cluster config update ...
	I1123 08:13:05.074062   18653 start.go:256] writing updated cluster config ...
	I1123 08:13:05.074326   18653 ssh_runner.go:195] Run: rm -f paused
	I1123 08:13:05.081515   18653 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:13:05.085540   18653 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gxw2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.090989   18653 pod_ready.go:94] pod "coredns-66bc5c9577-gxw2m" is "Ready"
	I1123 08:13:05.091008   18653 pod_ready.go:86] duration metric: took 5.450504ms for pod "coredns-66bc5c9577-gxw2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.093678   18653 pod_ready.go:83] waiting for pod "etcd-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.098199   18653 pod_ready.go:94] pod "etcd-addons-964416" is "Ready"
	I1123 08:13:05.098219   18653 pod_ready.go:86] duration metric: took 4.519474ms for pod "etcd-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.100500   18653 pod_ready.go:83] waiting for pod "kube-apiserver-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.106213   18653 pod_ready.go:94] pod "kube-apiserver-addons-964416" is "Ready"
	I1123 08:13:05.106236   18653 pod_ready.go:86] duration metric: took 5.713706ms for pod "kube-apiserver-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.108546   18653 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.485714   18653 pod_ready.go:94] pod "kube-controller-manager-addons-964416" is "Ready"
	I1123 08:13:05.485750   18653 pod_ready.go:86] duration metric: took 377.186648ms for pod "kube-controller-manager-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:05.690238   18653 pod_ready.go:83] waiting for pod "kube-proxy-cp69g" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:06.085885   18653 pod_ready.go:94] pod "kube-proxy-cp69g" is "Ready"
	I1123 08:13:06.085907   18653 pod_ready.go:86] duration metric: took 395.638395ms for pod "kube-proxy-cp69g" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:06.288019   18653 pod_ready.go:83] waiting for pod "kube-scheduler-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:06.685739   18653 pod_ready.go:94] pod "kube-scheduler-addons-964416" is "Ready"
	I1123 08:13:06.685774   18653 pod_ready.go:86] duration metric: took 397.732698ms for pod "kube-scheduler-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:13:06.685791   18653 pod_ready.go:40] duration metric: took 1.604246897s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:13:06.730388   18653 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:13:06.731934   18653 out.go:179] * Done! kubectl is now configured to use "addons-964416" cluster and "default" namespace by default
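Aside, for readers following the gcp-auth hint in the output above: the opt-out mechanism is a pod label. A minimal sketch of a pod that skips credential mounting, assuming the conventional value "true" (the log message names only the `gcp-auth-skip-secret` key; pod name and image here are illustrative, not from this test run):

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-auth-demo            # illustrative name, not part of this report
  labels:
    gcp-auth-skip-secret: "true"    # label key taken from the gcp-auth message above; value assumed
spec:
  containers:
  - name: app
    image: nginx                    # illustrative image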
	
	
	==> CRI-O <==
	Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
	Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: terminal_ctrl_fd: 12
	Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: winsz read side: 16, winsz write side: 17
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.152858037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6772d6f2-b4d4-4de3-a225-5cfcf2927bc7 name=/runtime.v1.RuntimeService/Version
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.152923754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6772d6f2-b4d4-4de3-a225-5cfcf2927bc7 name=/runtime.v1.RuntimeService/Version
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.154153950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39fe12f3-52fb-409c-995c-7c13d8e52369 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.161487401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763885771161460598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39fe12f3-52fb-409c-995c-7c13d8e52369 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.162590443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27e0c1e9-bad3-4c18-bb4b-36370da2e1c9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.162894587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27e0c1e9-bad3-4c18-bb4b-36370da2e1c9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.163591195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e1b2efb20e18e880e59f64bc49b3114d53ccde7e613edc5b4615dc84fcd0a9,PodSandboxId:d6bf7a0c9178de5aeded44ed3172fe8a5fa37b1637e181238bae040a3132ac32,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763885628957386781,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16094845-c835-4494-a064-31053be1943b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13ad04ce6225dcf840a6cf8802f4ba866d65bb5e34d4bf31ab7e5f17e7b741,PodSandboxId:88d41c059de83448dda19955ce8fb31c3489bea24a5f320b367b9d98641ffebf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763885590231067098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5d5468e-0e81-49d7-8cef-aec9926db30e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fcfcb8eb46bc8c8ad289f19e3217cfb5ad9dfda4f22775c0a49639411e4285b,PodSandboxId:de8125f44caed4f5ad920cab9a0bb988de7dc2f16c74de14c821e650578a3134,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763885581360265657,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-d2lnn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e3e96657-f191-4984-ad8c-72b0ab056c55,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e41b23988404444894d83351fae6c2b44e55b7c753a80b8e5fd0a5e0fca26d59,PodSandboxId:adcafe8d23dcc115ab2ac8cc000e1264c58b4386ebd44aae9b79294b3ce1c6ea,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763885551420361747,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qjtrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c548cf97-ddd2-4a1a-919e-311e39bd3833,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d0112d0fad9f107107787fcc4761e8dc0a95d6ec3ab85df4820ac9dcba53be,PodSandboxId:e6ecc919cd54c5979641c50f531570e7f0db93d499967b4365cf666601922407,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763885550605397523,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n8xfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50dea3aa-fc75-4df0-bb04-bd8fd77e7ff9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de81c25adb70a06d2e34c5fd93db6dbd056e629e8c1a19d19296966543bb3794,PodSandboxId:3fed9ced7aa5025979185f08ffa8128f912cdbf3370def2dca177c87e468cb93,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763885536633541503,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc33e34a-7ac6-484c-b0a7-430085041ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d5ef951538ac924b333da325abf735f2e52434e3c5dab819290dc703c0fa9f,PodSandboxId:17af1c9dedcd0272d4ffcb547936e10b9b74d0c546f2751eb7944aeacf774f79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f70
9a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763885511479730934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc19a5-6c67-4faa-af77-b5dc63837928,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034b673ca1afef8954547ba3b46fd029c5a7e32e9cae3456c825536ee88059e6,PodSandboxId:b4dc96fcea260adda9b8ee394b9b2bb5c3afdf293214bf8627dd585930863e57,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a0
7c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763885509150592446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-8vc9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8295884f-da88-49f2-9084-a9c8cfc1e4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c085c7e3c7e1d4180e3f556a2b13e400e1a3a39cd49b5d8a82e0e6cbb197ee2,PodSandboxId:4d8a9af25383570f4daeb138a79efa23e5ee969bce155aa8a528afeed7cce39e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763885502664071177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gxw2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a32a377fb8fd085baa47ac0065a2a6c9b61233646d15f815186bfb912aaee0,PodSandboxId:e6983ca5f266bea92319da768810baecbeb05b50b53084f38979f587c025a089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763885501513009089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cp69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6331ff-3dfb-46c8-b853-3ac13fdd22cc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc7808cbafa35ac18cf85d26bfed95c36a01bfae4fee82ff44e13e37accb2fb,PodSandboxId:9b65b771852981bff123c7c64aea210a8b531e3f1a3e167c3fcdf73979a4e982,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763885489786436394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa4d6f814c0c0a234c1829d41f9cc06b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f0364c26ba8a2ffe836cbcc6d72ce91fb1532b3629b02515db50a6d4b466dc0,PodSandboxId:fb568a606e43974dcf74554272588ec98d2a159da91a96197ac316a5aba04b2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763885489807444229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc48a5f24208a1a403f153c19e9b10a,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39699de5a00c064cdca41c90eb8b78538e5879de76016c72552fe5d7db95d87e,PodSandboxId:5c6c286389717dc5b739c64240d1166c11fa677abe126f45c071987cabb0aafa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763885489772029608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3709f2b029d1230ca25347545eb530b,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e910ff123e32ce12c666332e542d611040ccdc568a9fc18717d44e9a60184ce,PodSandboxId:c6f20fa3ad6a2efc964c8e924a906253b4e17d98d581838dbe3aeb539efec671,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763885489739229669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-964416,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: b5bb7c82c50c3697588cc803d0c3e419,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27e0c1e9-bad3-4c18-bb4b-36370da2e1c9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: container PID: 12564
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.185924166Z" level=debug msg="Received container pid: 12564" file="oci/runtime_oci.go:284" id=51f92f68-a295-441c-b4d9-a4d686c1d82f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.201590134Z" level=info msg="Created container d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7: default/hello-world-app-5d498dc89-4czrb/hello-world-app" file="server/container_create.go:491" id=51f92f68-a295-441c-b4d9-a4d686c1d82f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.201786552Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7,}" file="otel-collector/interceptors.go:74" id=51f92f68-a295-441c-b4d9-a4d686c1d82f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.203009299Z" level=debug msg="Request: &StartContainerRequest{ContainerId:d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7,}" file="otel-collector/interceptors.go:62" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.203139344Z" level=info msg="Starting container: d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7" file="server/container_start.go:21" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.211783459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b402f1e5-77d2-4003-a9d4-58749b3e19cf name=/runtime.v1.RuntimeService/Version
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.211910323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b402f1e5-77d2-4003-a9d4-58749b3e19cf name=/runtime.v1.RuntimeService/Version
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.214601323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2af74257-4803-4512-90e4-d0c25522e3f9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.216184987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763885771216158074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2af74257-4803-4512-90e4-d0c25522e3f9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.218915582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f1f90bb-c943-4cbe-ada3-290be07e07b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.218975096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f1f90bb-c943-4cbe-ada3-290be07e07b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.219292381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7,PodSandboxId:f281d2831dd0b2a9dd27cbe28e438d3893facfdde33318d75e0e3112d2d7d992,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_CREATED,CreatedAt:1763885771130471452,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-4czrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 542a36d2-e7f4-4a68-8a14-d26c69029ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e1b2efb20e18e880e59f64bc49b3114d53ccde7e613edc5b4615dc84fcd0a9,PodSandboxId:d6bf7a0c9178de5aeded44ed3172fe8a5fa37b1637e181238bae040a3132ac32,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763885628957386781,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16094845-c835-4494-a064-31053be1943b,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13ad04ce6225dcf840a6cf8802f4ba866d65bb5e34d4bf31ab7e5f17e7b741,PodSandboxId:88d41c059de83448dda19955ce8fb31c3489bea24a5f320b367b9d98641ffebf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763885590231067098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5d5468e-0e81-49d7-8c
ef-aec9926db30e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fcfcb8eb46bc8c8ad289f19e3217cfb5ad9dfda4f22775c0a49639411e4285b,PodSandboxId:de8125f44caed4f5ad920cab9a0bb988de7dc2f16c74de14c821e650578a3134,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763885581360265657,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-d2lnn,io.kubernetes.pod.namespace: ingress-nginx,io.
kubernetes.pod.uid: e3e96657-f191-4984-ad8c-72b0ab056c55,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e41b23988404444894d83351fae6c2b44e55b7c753a80b8e5fd0a5e0fca26d59,PodSandboxId:adcafe8d23dcc115ab2ac8cc000e1264c58b4386ebd44aae9b79294b3ce1c6ea,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763885551420361747,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qjtrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c548cf97-ddd2-4a1a-919e-311e39bd3833,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d0112d0fad9f107107787fcc4761e8dc0a95d6ec3ab85df4820ac9dcba53be,PodSandboxId:e6ecc919cd54c5979641c50f531570e7f0db93d499967b4365cf666601922407,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763885550605397523,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n8xfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50dea3aa-fc75-4df0-bb04-bd8fd77e7ff9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de81c25adb70a06d2e34c5fd93db6dbd056e629e8c1a19d19296966543bb3794,PodSandboxId:3fed9ced7aa5025979185f08ffa8128f912cdbf3370def2dca177c87e468cb93,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763885536633541503,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc33e34a-7ac6-484c-b0a7-430085041ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d5ef951538ac924b333da325abf735f2e52434e3c5dab819290dc703c0fa9f,PodSandboxId:17af1c9dedcd0272d4ffcb547936e10b9b74d0c546f2751eb7944aeacf774f79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763885511479730934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc19a5-6c67-4faa-af77-b5dc63837928,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034b673ca1afef8954547ba3b46fd029c5a7e32e9cae3456c825536ee88059e6,PodSandboxId:b4dc96fcea260adda9b8ee394b9b2bb5c3afdf293214bf8627dd585930863e57,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763885509150592446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-8vc9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8295884f-da88-49f2-9084-a9c8cfc1e4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c085c7e3c7e1d4180e3f556a2b13e400e1a3a39cd49b5d8a82e0e6cbb197ee2,PodSandboxId:4d8a9af25383570f4daeb138a79efa23e5ee969bce155aa8a528afeed7cce39e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763885502664071177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gxw2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a32a377fb8fd085baa47ac0065a2a6c9b61233646d15f815186bfb912aaee0,PodSandboxId:e6983ca5f266bea92319da768810baecbeb05b50b53084f38979f587c025a089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763885501513009089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cp69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6331ff-3dfb-46c8-b853-3ac13fdd22cc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc7808cbafa35ac18cf85d26bfed95c36a01bfae4fee82ff44e13e37accb2fb,PodSandboxId:9b65b771852981bff123c7c64aea210a8b531e3f1a3e167c3fcdf73979a4e982,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763885489786436394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa4d6f814c0c0a234c1829d41f9cc06b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f0364c26ba8a2ffe836cbcc6d72ce91fb1532b3629b02515db50a6d4b466dc0,PodSandboxId:fb568a606e43974dcf74554272588ec98d2a159da91a96197ac316a5aba04b2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763885489807444229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc48a5f24208a1a403f153c19e9b10a,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39699de5a00c064cdca41c90eb8b78538e5879de76016c72552fe5d7db95d87e,PodSandboxId:5c6c286389717dc5b739c64240d1166c11fa677abe126f45c071987cabb0aafa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763885489772029608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3709f2b029d1230ca25347545eb530b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e910ff123e32ce12c666332e542d611040ccdc568a9fc18717d44e9a60184ce,PodSandboxId:c6f20fa3ad6a2efc964c8e924a906253b4e17d98d581838dbe3aeb539efec671,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763885489739229669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5bb7c82c50c3697588cc803d0c3e419,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f1f90bb-c943-4cbe-ada3-290be07e07b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.224707843Z" level=info msg="Started container" PID=12564 containerID=d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7 description=default/hello-world-app-5d498dc89-4czrb/hello-world-app file="server/container_start.go:115" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f281d2831dd0b2a9dd27cbe28e438d3893facfdde33318d75e0e3112d2d7d992
	Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.240462934Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	d5fc58698cf6b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   f281d2831dd0b       hello-world-app-5d498dc89-4czrb            default
	75e1b2efb20e1       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago            Running             nginx                     0                   d6bf7a0c9178d       nginx                                      default
	3a13ad04ce622       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   88d41c059de83       busybox                                    default
	1fcfcb8eb46bc       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago            Running             controller                0                   de8125f44caed       ingress-nginx-controller-6c8bf45fb-d2lnn   ingress-nginx
	e41b239884044       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             3 minutes ago            Exited              patch                     1                   adcafe8d23dcc       ingress-nginx-admission-patch-qjtrl        ingress-nginx
	53d0112d0fad9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago            Exited              create                    0                   e6ecc919cd54c       ingress-nginx-admission-create-n8xfv       ingress-nginx
	de81c25adb70a       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   3fed9ced7aa50       kube-ingress-dns-minikube                  kube-system
	82d5ef951538a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   17af1c9dedcd0       storage-provisioner                        kube-system
	034b673ca1afe       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   b4dc96fcea260       amd-gpu-device-plugin-8vc9q                kube-system
	6c085c7e3c7e1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago            Running             coredns                   0                   4d8a9af253835       coredns-66bc5c9577-gxw2m                   kube-system
	33a32a377fb8f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago            Running             kube-proxy                0                   e6983ca5f266b       kube-proxy-cp69g                           kube-system
	7f0364c26ba8a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago            Running             kube-scheduler            0                   fb568a606e439       kube-scheduler-addons-964416               kube-system
	9bc7808cbafa3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago            Running             etcd                      0                   9b65b77185298       etcd-addons-964416                         kube-system
	39699de5a00c0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago            Running             kube-controller-manager   0                   5c6c286389717       kube-controller-manager-addons-964416      kube-system
	9e910ff123e32       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago            Running             kube-apiserver            0                   c6f20fa3ad6a2       kube-apiserver-addons-964416               kube-system
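	
	The table above is the human-readable view of the same data as the huge ListContainers response in the crio debug log: container status is just a gRPC call against the runtime socket. As a minimal sketch of the equivalent query in Go (the socket path /var/run/crio/crio.sock and the use of the k8s.io/cri-api module are assumptions for illustration, not taken from this report):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI is plain gRPC over a local unix socket, so no TLS is needed.
		// Assumed socket path; crio may be configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// The same RuntimeService/ListContainers RPC the crio debug log answers.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// IDs are 64 hex chars; the first 13 match the CONTAINER column above.
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}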
	
	
	==> coredns [6c085c7e3c7e1d4180e3f556a2b13e400e1a3a39cd49b5d8a82e0e6cbb197ee2] <==
	[INFO] 10.244.0.8:34929 - 37132 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000132878s
	[INFO] 10.244.0.8:34929 - 47308 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000194175s
	[INFO] 10.244.0.8:34929 - 54448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000310838s
	[INFO] 10.244.0.8:34929 - 46437 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000082882s
	[INFO] 10.244.0.8:34929 - 1722 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000150189s
	[INFO] 10.244.0.8:34929 - 14624 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000115232s
	[INFO] 10.244.0.8:34929 - 3595 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000197893s
	[INFO] 10.244.0.8:42096 - 49119 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142563s
	[INFO] 10.244.0.8:42096 - 48771 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000299107s
	[INFO] 10.244.0.8:40663 - 3895 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000165114s
	[INFO] 10.244.0.8:40663 - 3668 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00025171s
	[INFO] 10.244.0.8:50148 - 57087 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092353s
	[INFO] 10.244.0.8:50148 - 56608 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145897s
	[INFO] 10.244.0.8:40736 - 1872 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096571s
	[INFO] 10.244.0.8:40736 - 1708 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000310129s
	[INFO] 10.244.0.23:33329 - 55512 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000384926s
	[INFO] 10.244.0.23:52635 - 27490 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00013239s
	[INFO] 10.244.0.23:49599 - 45231 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124792s
	[INFO] 10.244.0.23:43686 - 59657 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084376s
	[INFO] 10.244.0.23:41900 - 50560 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091879s
	[INFO] 10.244.0.23:49637 - 36509 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129987s
	[INFO] 10.244.0.23:58010 - 5775 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00180098s
	[INFO] 10.244.0.23:42039 - 4844 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004089908s
	[INFO] 10.244.0.26:52134 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000411446s
	[INFO] 10.244.0.26:51430 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012813s
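	
	For context on the NXDOMAIN bursts above: each name is expanded through the pod's resolv.conf search path before being tried verbatim, which is why every lookup of registry.kube-system.svc.cluster.local first fails against the .kube-system.svc.cluster.local, .svc.cluster.local and .cluster.local suffixes and only then returns NOERROR. For a pod in kube-system the kubelet-generated resolv.conf typically looks like this (the nameserver IP is the usual cluster default, not taken from this report):
	
		search kube-system.svc.cluster.local svc.cluster.local cluster.local
		nameserver 10.96.0.10
		options ndots:5
	
	With ndots:5, any name with fewer than five dots goes through the search list first, so the three NXDOMAIN answers per query are expected noise rather than a resolution failure.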
	
	
	==> describe nodes <==
	Name:               addons-964416
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-964416
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=addons-964416
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_11_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-964416
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:11:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-964416
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:16:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:14:08 +0000   Sun, 23 Nov 2025 08:11:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:14:08 +0000   Sun, 23 Nov 2025 08:11:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:14:08 +0000   Sun, 23 Nov 2025 08:11:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:14:08 +0000   Sun, 23 Nov 2025 08:11:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    addons-964416
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 198921e33bb94b459dea69ff479a7843
	  System UUID:                198921e3-3bb9-4b45-9dea-69ff479a7843
	  Boot ID:                    ce72afb7-a3f6-4f51-b999-aef96396bed2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-5d498dc89-4czrb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-d2lnn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m22s
	  kube-system                 amd-gpu-device-plugin-8vc9q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-gxw2m                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-964416                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m36s
	  kube-system                 kube-apiserver-addons-964416                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-controller-manager-addons-964416       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-cp69g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-964416                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 4m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node addons-964416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node addons-964416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node addons-964416 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m36s                  kubelet          Node addons-964416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s                  kubelet          Node addons-964416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s                  kubelet          Node addons-964416 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m35s                  kubelet          Node addons-964416 status is now: NodeReady
	  Normal  RegisteredNode           4m32s                  node-controller  Node addons-964416 event: Registered Node addons-964416 in Controller
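	
	A quick sanity check on the "Allocated resources" block above: percentages are computed against allocatable, so cpu 850m / 2000m = 42.5%, shown as 42%, and memory 260Mi = 266240Ki out of 4001788Ki ≈ 6.7%, shown as 6% (kubectl appears to truncate rather than round). Nothing in this table suggests the node was resource-starved while the ingress request was timing out.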
	
	
	==> dmesg <==
	[  +1.114548] kauditd_printk_skb: 321 callbacks suppressed
	[  +1.396067] kauditd_printk_skb: 344 callbacks suppressed
	[  +2.244772] kauditd_printk_skb: 347 callbacks suppressed
	[Nov23 08:12] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.130214] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.696609] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.272444] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.189814] kauditd_printk_skb: 152 callbacks suppressed
	[  +3.890564] kauditd_printk_skb: 91 callbacks suppressed
	[  +3.450258] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000089] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000159] kauditd_printk_skb: 29 callbacks suppressed
	[Nov23 08:13] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.499629] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.549604] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.926217] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.589167] kauditd_printk_skb: 39 callbacks suppressed
	[  +0.000955] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.960938] kauditd_printk_skb: 147 callbacks suppressed
	[  +2.480230] kauditd_printk_skb: 181 callbacks suppressed
	[  +0.000254] kauditd_printk_skb: 102 callbacks suppressed
	[Nov23 08:14] kauditd_printk_skb: 106 callbacks suppressed
	[  +0.000067] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.861859] kauditd_printk_skb: 41 callbacks suppressed
	[Nov23 08:16] kauditd_printk_skb: 147 callbacks suppressed
	
	
	==> etcd [9bc7808cbafa35ac18cf85d26bfed95c36a01bfae4fee82ff44e13e37accb2fb] <==
	{"level":"info","ts":"2025-11-23T08:12:58.263562Z","caller":"traceutil/trace.go:172","msg":"trace[346416027] linearizableReadLoop","detail":"{readStateIndex:1207; appliedIndex:1207; }","duration":"114.232107ms","start":"2025-11-23T08:12:58.149315Z","end":"2025-11-23T08:12:58.263547Z","steps":["trace[346416027] 'read index received'  (duration: 114.225272ms)","trace[346416027] 'applied index is now lower than readState.Index'  (duration: 5.663µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:12:58.263681Z","caller":"traceutil/trace.go:172","msg":"trace[946567403] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"201.411685ms","start":"2025-11-23T08:12:58.062260Z","end":"2025-11-23T08:12:58.263672Z","steps":["trace[946567403] 'process raft request'  (duration: 201.312454ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:12:58.263910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.599451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaimtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:12:58.263994Z","caller":"traceutil/trace.go:172","msg":"trace[854540074] range","detail":"{range_begin:/registry/resourceclaimtemplates; range_end:; response_count:0; response_revision:1176; }","duration":"114.693979ms","start":"2025-11-23T08:12:58.149293Z","end":"2025-11-23T08:12:58.263987Z","steps":["trace[854540074] 'agreement among raft nodes before linearized reading'  (duration: 114.581576ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:01.194625Z","caller":"traceutil/trace.go:172","msg":"trace[78832867] linearizableReadLoop","detail":"{readStateIndex:1212; appliedIndex:1212; }","duration":"133.69601ms","start":"2025-11-23T08:13:01.060912Z","end":"2025-11-23T08:13:01.194608Z","steps":["trace[78832867] 'read index received'  (duration: 133.690869ms)","trace[78832867] 'applied index is now lower than readState.Index'  (duration: 4.23µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:13:01.195030Z","caller":"traceutil/trace.go:172","msg":"trace[1300969811] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"184.536657ms","start":"2025-11-23T08:13:01.010482Z","end":"2025-11-23T08:13:01.195018Z","steps":["trace[1300969811] 'process raft request'  (duration: 184.176165ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:13:01.195245Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.353424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:13:01.195265Z","caller":"traceutil/trace.go:172","msg":"trace[1652862] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1180; }","duration":"134.382028ms","start":"2025-11-23T08:13:01.060877Z","end":"2025-11-23T08:13:01.195259Z","steps":["trace[1652862] 'agreement among raft nodes before linearized reading'  (duration: 134.078381ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:04.472297Z","caller":"traceutil/trace.go:172","msg":"trace[1756090634] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"168.545941ms","start":"2025-11-23T08:13:04.303738Z","end":"2025-11-23T08:13:04.472283Z","steps":["trace[1756090634] 'process raft request'  (duration: 168.408372ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:31.556461Z","caller":"traceutil/trace.go:172","msg":"trace[53365910] linearizableReadLoop","detail":"{readStateIndex:1404; appliedIndex:1404; }","duration":"217.993552ms","start":"2025-11-23T08:13:31.338452Z","end":"2025-11-23T08:13:31.556446Z","steps":["trace[53365910] 'read index received'  (duration: 217.988589ms)","trace[53365910] 'applied index is now lower than readState.Index'  (duration: 4.355µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:13:31.556665Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.233943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2025-11-23T08:13:31.556686Z","caller":"traceutil/trace.go:172","msg":"trace[118254718] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1365; }","duration":"218.269244ms","start":"2025-11-23T08:13:31.338410Z","end":"2025-11-23T08:13:31.556680Z","steps":["trace[118254718] 'agreement among raft nodes before linearized reading'  (duration: 218.144533ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:31.556745Z","caller":"traceutil/trace.go:172","msg":"trace[408040910] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"288.6452ms","start":"2025-11-23T08:13:31.268083Z","end":"2025-11-23T08:13:31.556728Z","steps":["trace[408040910] 'process raft request'  (duration: 288.412942ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:42.824992Z","caller":"traceutil/trace.go:172","msg":"trace[1346268108] linearizableReadLoop","detail":"{readStateIndex:1509; appliedIndex:1509; }","duration":"180.761231ms","start":"2025-11-23T08:13:42.644217Z","end":"2025-11-23T08:13:42.824978Z","steps":["trace[1346268108] 'read index received'  (duration: 180.755347ms)","trace[1346268108] 'applied index is now lower than readState.Index'  (duration: 4.658µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:13:42.825120Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.889323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:13:42.825140Z","caller":"traceutil/trace.go:172","msg":"trace[1689261646] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1466; }","duration":"180.921565ms","start":"2025-11-23T08:13:42.644212Z","end":"2025-11-23T08:13:42.825134Z","steps":["trace[1689261646] 'agreement among raft nodes before linearized reading'  (duration: 180.865956ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:13:42.826088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.64829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-23T08:13:42.826242Z","caller":"traceutil/trace.go:172","msg":"trace[934387840] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1467; }","duration":"102.809318ms","start":"2025-11-23T08:13:42.723425Z","end":"2025-11-23T08:13:42.826234Z","steps":["trace[934387840] 'agreement among raft nodes before linearized reading'  (duration: 102.588071ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:42.826347Z","caller":"traceutil/trace.go:172","msg":"trace[994045618] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1467; }","duration":"224.338124ms","start":"2025-11-23T08:13:42.601996Z","end":"2025-11-23T08:13:42.826335Z","steps":["trace[994045618] 'process raft request'  (duration: 223.675708ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:13:44.521336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.355564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/yakd-dashboard/yakd-dashboard-5ff678cb9\" limit:1 ","response":"range_response_count:1 size:3621"}
	{"level":"info","ts":"2025-11-23T08:13:44.525548Z","caller":"traceutil/trace.go:172","msg":"trace[1367152762] range","detail":"{range_begin:/registry/replicasets/yakd-dashboard/yakd-dashboard-5ff678cb9; range_end:; response_count:1; response_revision:1499; }","duration":"216.563053ms","start":"2025-11-23T08:13:44.308964Z","end":"2025-11-23T08:13:44.525527Z","steps":["trace[1367152762] 'range keys from in-memory index tree'  (duration: 205.554285ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:13:44.522154Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.200821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2025-11-23T08:13:44.528192Z","caller":"traceutil/trace.go:172","msg":"trace[1277894628] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1499; }","duration":"138.251632ms","start":"2025-11-23T08:13:44.389932Z","end":"2025-11-23T08:13:44.528183Z","steps":["trace[1277894628] 'range keys from in-memory index tree'  (duration: 132.11237ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:44.864965Z","caller":"traceutil/trace.go:172","msg":"trace[785793712] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1506; }","duration":"103.644121ms","start":"2025-11-23T08:13:44.761307Z","end":"2025-11-23T08:13:44.864951Z","steps":["trace[785793712] 'process raft request'  (duration: 103.490123ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:13:48.755944Z","caller":"traceutil/trace.go:172","msg":"trace[1931102737] transaction","detail":"{read_only:false; response_revision:1552; number_of_response:1; }","duration":"116.768955ms","start":"2025-11-23T08:13:48.639160Z","end":"2025-11-23T08:13:48.755929Z","steps":["trace[1931102737] 'process raft request'  (duration: 116.591819ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:16:11 up 5 min,  0 users,  load average: 0.67, 1.10, 0.58
	Linux addons-964416 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9e910ff123e32ce12c666332e542d611040ccdc568a9fc18717d44e9a60184ce] <==
	W1123 08:12:09.822322       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:12:09.852214       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:12:09.898957       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1123 08:12:09.926132       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1123 08:13:17.611749       1 conn.go:339] Error on socket receive: read tcp 192.168.39.198:8443->192.168.39.1:42274: use of closed network connection
	I1123 08:13:26.928315       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.34.250"}
	I1123 08:13:45.778098       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 08:13:46.048254       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.133.87"}
	I1123 08:13:53.531513       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1123 08:14:08.484523       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1123 08:14:09.917549       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1123 08:14:22.934753       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 08:14:22.935163       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1123 08:14:23.002373       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 08:14:23.002493       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1123 08:14:23.016104       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 08:14:23.016165       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1123 08:14:23.037056       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 08:14:23.037152       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1123 08:14:23.116975       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 08:14:23.117011       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1123 08:14:24.016591       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1123 08:14:24.117369       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1123 08:14:24.161684       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1123 08:16:09.988338       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.188.147"}
	
	
	==> kube-controller-manager [39699de5a00c064cdca41c90eb8b78538e5879de76016c72552fe5d7db95d87e] <==
	E1123 08:14:31.904649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:14:33.620685       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:14:33.621940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:14:39.225085       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:14:39.226548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1123 08:14:39.941436       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 08:14:39.941545       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:14:39.993205       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 08:14:39.993274       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 08:14:40.934382       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:14:40.936279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:14:42.837216       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:14:42.838281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:14:59.683130       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:14:59.684236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:15:00.738026       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:15:00.738944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:15:07.477705       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:15:07.479413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:15:33.465920       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:15:33.467191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:15:42.284918       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:15:42.286215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 08:15:48.254660       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 08:15:48.255727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
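	
	The repeated PartialObjectMetadata watch failures above begin right after the apiserver unregistered the snapshot.storage.k8s.io groups (see the "Terminating all watchers" lines at 08:14:24 in the kube-apiserver log above): the controller-manager's metadata informer keeps retrying resource types that no longer exist. After the volumesnapshots addon is disabled this is expected noise; a check such as `kubectl get crd | grep snapshot` returning nothing would confirm the CRDs are really gone (illustrative, not part of this run).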
	
	
	==> kube-proxy [33a32a377fb8fd085baa47ac0065a2a6c9b61233646d15f815186bfb912aaee0] <==
	I1123 08:11:42.151021       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:11:42.252307       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:11:42.252355       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.198"]
	E1123 08:11:42.252424       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:11:42.542292       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1123 08:11:42.542359       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1123 08:11:42.542385       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:11:42.559573       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:11:42.561059       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:11:42.561093       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:11:42.577158       1 config.go:200] "Starting service config controller"
	I1123 08:11:42.578438       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:11:42.578716       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:11:42.578724       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:11:42.578978       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:11:42.578986       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:11:42.587794       1 config.go:309] "Starting node config controller"
	I1123 08:11:42.587976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:11:42.679331       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:11:42.679399       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:11:42.679440       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:11:42.689078       1 shared_informer.go:356] "Caches are synced" controller="node config"
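	
	The ip6tables error above means this minimal Buildroot kernel has no nat table for IPv6, so kube-proxy logs it and, as the following lines confirm, continues in single-stack IPv4 mode; it does not affect the IPv4 ingress path under test. If dual-stack were needed, loading the module (e.g. `modprobe ip6table_nat`, assuming it is built for this kernel) would be the usual first step.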
	
	
	==> kube-scheduler [7f0364c26ba8a2ffe836cbcc6d72ce91fb1532b3629b02515db50a6d4b466dc0] <==
	I1123 08:11:33.372123       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:11:33.372736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:11:33.373161       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:11:33.372769       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1123 08:11:33.376216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:11:33.376324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:11:33.380328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:11:33.380529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:11:33.380601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:11:33.380654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:11:33.380687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:11:33.380743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:11:33.380778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:11:33.380889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:11:33.380916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:11:33.380998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:11:33.381620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:11:33.381629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:11:33.381732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:11:33.381799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:11:33.381847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:11:33.381950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:11:33.382226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:11:34.288353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 08:11:36.673912       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
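	
	The burst of "forbidden" list errors above is the usual startup race: the scheduler comes up before its RBAC bindings and the extension-apiserver-authentication ConfigMap are readable, retries, and is healthy once "Caches are synced" at 08:11:36. If a genuine permission problem were suspected, a spot check such as `kubectl auth can-i list pods --as=system:kube-scheduler` (illustrative, not from this run) would distinguish a transient race from a misconfigured role.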
	
	
	==> kubelet <==
	Nov 23 08:14:36 addons-964416 kubelet[1501]: I1123 08:14:36.797555    1501 scope.go:117] "RemoveContainer" containerID="4febd74e4a681e8e173be6c68618b2fbf4f51353856d4df6171b3b4a79c388cd"
	Nov 23 08:14:36 addons-964416 kubelet[1501]: I1123 08:14:36.914451    1501 scope.go:117] "RemoveContainer" containerID="417a81c3c28cbe904ab69f5fbf17edb0898de82f819a56e9a0a9dda73a872883"
	Nov 23 08:14:37 addons-964416 kubelet[1501]: I1123 08:14:37.030888    1501 scope.go:117] "RemoveContainer" containerID="48e018b18ce521f79c9dcc1b911ab68af1b7a68e6427e053ab8b723ad07af9af"
	Nov 23 08:14:37 addons-964416 kubelet[1501]: I1123 08:14:37.151867    1501 scope.go:117] "RemoveContainer" containerID="f739082aca64711f4bb3e4a6759ba61e34f287d1886b3d6484e74aac69600482"
	Nov 23 08:14:45 addons-964416 kubelet[1501]: E1123 08:14:45.695723    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885685695259039  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:14:45 addons-964416 kubelet[1501]: E1123 08:14:45.695751    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885685695259039  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:14:55 addons-964416 kubelet[1501]: E1123 08:14:55.699101    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885695698618394  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:14:55 addons-964416 kubelet[1501]: E1123 08:14:55.699169    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885695698618394  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:05 addons-964416 kubelet[1501]: E1123 08:15:05.702892    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885705702400592  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:05 addons-964416 kubelet[1501]: E1123 08:15:05.702925    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885705702400592  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:15 addons-964416 kubelet[1501]: E1123 08:15:15.707342    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885715706085904  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:15 addons-964416 kubelet[1501]: E1123 08:15:15.707393    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885715706085904  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:25 addons-964416 kubelet[1501]: E1123 08:15:25.710185    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885725709375223  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:25 addons-964416 kubelet[1501]: E1123 08:15:25.710396    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885725709375223  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:31 addons-964416 kubelet[1501]: I1123 08:15:31.256646    1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:15:32 addons-964416 kubelet[1501]: I1123 08:15:32.255443    1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8vc9q" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:15:35 addons-964416 kubelet[1501]: E1123 08:15:35.716524    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885735716122413  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:35 addons-964416 kubelet[1501]: E1123 08:15:35.716545    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885735716122413  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:45 addons-964416 kubelet[1501]: E1123 08:15:45.719632    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885745719030901  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:45 addons-964416 kubelet[1501]: E1123 08:15:45.719679    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885745719030901  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:55 addons-964416 kubelet[1501]: E1123 08:15:55.722350    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885755721926782  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:15:55 addons-964416 kubelet[1501]: E1123 08:15:55.722396    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885755721926782  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:16:05 addons-964416 kubelet[1501]: E1123 08:16:05.725299    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885765724792008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:16:05 addons-964416 kubelet[1501]: E1123 08:16:05.725344    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885765724792008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588566}  inodes_used:{value:201}}"
	Nov 23 08:16:10 addons-964416 kubelet[1501]: I1123 08:16:10.051401    1501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq2d6\" (UniqueName: \"kubernetes.io/projected/542a36d2-e7f4-4a68-8a14-d26c69029ccd-kube-api-access-vq2d6\") pod \"hello-world-app-5d498dc89-4czrb\" (UID: \"542a36d2-e7f4-4a68-8a14-d26c69029ccd\") " pod="default/hello-world-app-5d498dc89-4czrb"
	
	
	==> storage-provisioner [82d5ef951538ac924b333da325abf735f2e52434e3c5dab819290dc703c0fa9f] <==
	W1123 08:15:47.613227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:49.616726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:49.622571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:51.626423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:51.631760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:53.636351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:53.645038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:55.649575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:55.656158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:57.659480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:57.668064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:59.673057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:59.679089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:01.682521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:01.690760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:03.694616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:03.699719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:05.703550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:05.712982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:07.717251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:07.722744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:09.727660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:09.735082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:11.741064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:11.745934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
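Note: two classes of repeated warnings in the logs above are likely unrelated to the ingress failure. The kubelet eviction-manager "failed to get HasDedicatedImageFs ... missing image stats" errors appear to be a known kubelet/cri-o image-stats mismatch, and the storage-provisioner "v1 Endpoints is deprecated" warnings come from its Endpoints-based leader-election polling. A hedged starting point for manual triage, reusing only the context name from this run (the namespaces and resource selections below are assumptions for illustration, not part of the test):

	# list the ingress controller and the test workload; everything except the
	# --context value is an assumed example, not taken from this log
	kubectl --context addons-964416 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-964416 -n default get pods,svc,ingress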
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-964416 -n addons-964416
helpers_test.go:269: (dbg) Run:  kubectl --context addons-964416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-964416 describe pod ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-964416 describe pod ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl: exit status 1 (60.34429ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-n8xfv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qjtrl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-964416 describe pod ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl: exit status 1
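The NotFound errors above are expected post-mortem noise rather than a second failure: the admission create/patch pods belong to one-shot Jobs that complete and get cleaned up, so the pods no longer exist by the time the post-mortem runs describe pod. A sketch of an alternative check that survives that cleanup, assuming the usual ingress-nginx Job names (they are not recorded in this log):

	# inspect the Jobs instead of their already-deleted pods; the Job names below
	# are assumed from standard ingress-nginx naming
	kubectl --context addons-964416 -n ingress-nginx get jobs -o wide
	kubectl --context addons-964416 -n ingress-nginx describe job ingress-nginx-admission-create ingress-nginx-admission-patch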
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable ingress-dns --alsologtostderr -v=1: (1.784819733s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable ingress --alsologtostderr -v=1: (7.745813275s)
--- FAIL: TestAddons/parallel/Ingress (156.34s)

                                                
                                    

TestPreload (124.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-119969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1123 09:01:01.118903   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-119969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m1.402133006s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-119969 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-119969 image pull gcr.io/k8s-minikube/busybox: (2.428197503s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-119969
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-119969: (6.826927594s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-119969 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-119969 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (51.360749087s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-119969 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
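The missing gcr.io/k8s-minikube/busybox entry is consistent with the restart path shown later in this log: although the profile was created with --preload=false, the second start downloads and extracts the v1.32.0 preload tarball (see the preload.go/download.go lines around 09:01:53-09:02:02 below), which repopulates the cri-o image store and can drop an image pulled after the initial start. That is a plausible reading of this log, not a confirmed root cause. A minimal manual repro of the failing sequence, using only the commands, flags, and profile name recorded above:

	out/minikube-linux-amd64 start -p test-preload-119969 --memory=3072 --preload=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-119969 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-119969
	out/minikube-linux-amd64 start -p test-preload-119969 --memory=3072 --driver=kvm2 --container-runtime=crio
	# busybox is expected in this listing but is absent in the run above
	out/minikube-linux-amd64 -p test-preload-119969 image list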
panic.go:615: *** TestPreload FAILED at 2025-11-23 09:02:44.876575684 +0000 UTC m=+3118.060899670
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-119969 -n test-preload-119969
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-119969 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-901565 ssh -n multinode-901565-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:49 UTC │ 23 Nov 25 08:49 UTC │
	│ ssh     │ multinode-901565 ssh -n multinode-901565 sudo cat /home/docker/cp-test_multinode-901565-m03_multinode-901565.txt                                          │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:49 UTC │ 23 Nov 25 08:49 UTC │
	│ cp      │ multinode-901565 cp multinode-901565-m03:/home/docker/cp-test.txt multinode-901565-m02:/home/docker/cp-test_multinode-901565-m03_multinode-901565-m02.txt │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:49 UTC │ 23 Nov 25 08:49 UTC │
	│ ssh     │ multinode-901565 ssh -n multinode-901565-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:49 UTC │ 23 Nov 25 08:49 UTC │
	│ ssh     │ multinode-901565 ssh -n multinode-901565-m02 sudo cat /home/docker/cp-test_multinode-901565-m03_multinode-901565-m02.txt                                  │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:49 UTC │ 23 Nov 25 08:49 UTC │
	│ node    │ multinode-901565 node stop m03                                                                                                                            │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:49 UTC │ 23 Nov 25 08:49 UTC │
	│ node    │ multinode-901565 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:49 UTC │ 23 Nov 25 08:50 UTC │
	│ node    │ list -p multinode-901565                                                                                                                                  │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:50 UTC │                     │
	│ stop    │ -p multinode-901565                                                                                                                                       │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:50 UTC │ 23 Nov 25 08:53 UTC │
	│ start   │ -p multinode-901565 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:53 UTC │ 23 Nov 25 08:55 UTC │
	│ node    │ list -p multinode-901565                                                                                                                                  │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ node    │ multinode-901565 node delete m03                                                                                                                          │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ stop    │ multinode-901565 stop                                                                                                                                     │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p multinode-901565 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ node    │ list -p multinode-901565                                                                                                                                  │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ start   │ -p multinode-901565-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-901565-m02 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ start   │ -p multinode-901565-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-901565-m03 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ node    │ add -p multinode-901565                                                                                                                                   │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ delete  │ -p multinode-901565-m03                                                                                                                                   │ multinode-901565-m03 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ delete  │ -p multinode-901565                                                                                                                                       │ multinode-901565     │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p test-preload-119969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-119969  │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:01 UTC │
	│ image   │ test-preload-119969 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-119969  │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ stop    │ -p test-preload-119969                                                                                                                                    │ test-preload-119969  │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p test-preload-119969 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-119969  │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ image   │ test-preload-119969 image list                                                                                                                            │ test-preload-119969  │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:01:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:01:53.379681   40218 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:01:53.379930   40218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:53.379939   40218 out.go:374] Setting ErrFile to fd 2...
	I1123 09:01:53.379942   40218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:53.380137   40218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 09:01:53.380584   40218 out.go:368] Setting JSON to false
	I1123 09:01:53.381370   40218 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6262,"bootTime":1763882251,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:01:53.381420   40218 start.go:143] virtualization: kvm guest
	I1123 09:01:53.383346   40218 out.go:179] * [test-preload-119969] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:01:53.384641   40218 notify.go:221] Checking for updates...
	I1123 09:01:53.384658   40218 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:01:53.385756   40218 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:01:53.387109   40218 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 09:01:53.388257   40218 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 09:01:53.389298   40218 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:01:53.390550   40218 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:01:53.392090   40218 config.go:182] Loaded profile config "test-preload-119969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 09:01:53.393721   40218 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 09:01:53.394882   40218 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:01:53.429756   40218 out.go:179] * Using the kvm2 driver based on existing profile
	I1123 09:01:53.430796   40218 start.go:309] selected driver: kvm2
	I1123 09:01:53.430810   40218 start.go:927] validating driver "kvm2" against &{Name:test-preload-119969 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-119969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:01:53.430910   40218 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:01:53.431881   40218 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:01:53.431917   40218 cni.go:84] Creating CNI manager for ""
	I1123 09:01:53.431987   40218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:01:53.432054   40218 start.go:353] cluster config:
	{Name:test-preload-119969 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-119969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:01:53.432147   40218 iso.go:125] acquiring lock: {Name:mk4b6da1d874cbf82d9df128fb5e9a0d9b7ea794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:01:53.433483   40218 out.go:179] * Starting "test-preload-119969" primary control-plane node in "test-preload-119969" cluster
	I1123 09:01:53.434670   40218 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 09:01:53.453612   40218 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 09:01:53.453640   40218 cache.go:65] Caching tarball of preloaded images
	I1123 09:01:53.453807   40218 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 09:01:53.455944   40218 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1123 09:01:53.456829   40218 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 09:01:53.481998   40218 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1123 09:01:53.482043   40218 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 09:02:02.245118   40218 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1123 09:02:02.245253   40218 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/config.json ...
	I1123 09:02:02.245512   40218 start.go:360] acquireMachinesLock for test-preload-119969: {Name:mk2573900f00f8e3cbe200607276d61a844e85b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1123 09:02:02.245583   40218 start.go:364] duration metric: took 40.771µs to acquireMachinesLock for "test-preload-119969"
	I1123 09:02:02.245599   40218 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:02:02.245604   40218 fix.go:54] fixHost starting: 
	I1123 09:02:02.247625   40218 fix.go:112] recreateIfNeeded on test-preload-119969: state=Stopped err=<nil>
	W1123 09:02:02.247660   40218 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:02:02.249274   40218 out.go:252] * Restarting existing kvm2 VM for "test-preload-119969" ...
	I1123 09:02:02.249306   40218 main.go:143] libmachine: starting domain...
	I1123 09:02:02.249315   40218 main.go:143] libmachine: ensuring networks are active...
	I1123 09:02:02.250113   40218 main.go:143] libmachine: Ensuring network default is active
	I1123 09:02:02.250530   40218 main.go:143] libmachine: Ensuring network mk-test-preload-119969 is active
	I1123 09:02:02.250935   40218 main.go:143] libmachine: getting domain XML...
	I1123 09:02:02.252026   40218 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-119969</name>
	  <uuid>11b0449e-d7e2-4cb3-9703-1c98df343d57</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/test-preload-119969.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:15:4d:21'/>
	      <source network='mk-test-preload-119969'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bc:a2:a3'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1123 09:02:02.645814   40218 main.go:143] libmachine: waiting for domain to start...
	I1123 09:02:02.647215   40218 main.go:143] libmachine: domain is now running
	I1123 09:02:02.647233   40218 main.go:143] libmachine: waiting for IP...
	I1123 09:02:02.648092   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:02.648713   40218 main.go:143] libmachine: domain test-preload-119969 has current primary IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:02.648729   40218 main.go:143] libmachine: found domain IP: 192.168.39.141
	I1123 09:02:02.648736   40218 main.go:143] libmachine: reserving static IP address...
	I1123 09:02:02.649172   40218 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-119969", mac: "52:54:00:15:4d:21", ip: "192.168.39.141"} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:00:57 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:02.649204   40218 main.go:143] libmachine: skip adding static IP to network mk-test-preload-119969 - found existing host DHCP lease matching {name: "test-preload-119969", mac: "52:54:00:15:4d:21", ip: "192.168.39.141"}
	I1123 09:02:02.649233   40218 main.go:143] libmachine: reserved static IP address 192.168.39.141 for domain test-preload-119969
	I1123 09:02:02.649244   40218 main.go:143] libmachine: waiting for SSH...
	I1123 09:02:02.649252   40218 main.go:143] libmachine: Getting to WaitForSSH function...
	I1123 09:02:02.651410   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:02.651775   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:00:57 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:02.651800   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:02.651948   40218 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:02.652163   40218 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1123 09:02:02.652175   40218 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1123 09:02:05.707701   40218 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.141:22: connect: no route to host
	I1123 09:02:11.787850   40218 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.141:22: connect: no route to host
	I1123 09:02:14.910407   40218 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:02:14.913936   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:14.914386   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:14.914411   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:14.914650   40218 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/config.json ...
	I1123 09:02:14.914824   40218 machine.go:94] provisionDockerMachine start ...
	I1123 09:02:14.917265   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:14.917621   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:14.917644   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:14.917787   40218 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:14.917970   40218 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1123 09:02:14.917979   40218 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:02:15.048505   40218 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1123 09:02:15.048542   40218 buildroot.go:166] provisioning hostname "test-preload-119969"
	I1123 09:02:15.051428   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.051826   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:15.051851   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.052022   40218 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:15.052232   40218 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1123 09:02:15.052249   40218 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-119969 && echo "test-preload-119969" | sudo tee /etc/hostname
	I1123 09:02:15.205686   40218 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-119969
	
	I1123 09:02:15.208864   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.209355   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:15.209388   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.209654   40218 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:15.209942   40218 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1123 09:02:15.209969   40218 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-119969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-119969/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-119969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:02:15.351288   40218 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:02:15.351313   40218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21969-14048/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-14048/.minikube}
	I1123 09:02:15.351331   40218 buildroot.go:174] setting up certificates
	I1123 09:02:15.351340   40218 provision.go:84] configureAuth start
	I1123 09:02:15.354075   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.354577   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:15.354601   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.357151   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.357541   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:15.357563   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.357711   40218 provision.go:143] copyHostCerts
	I1123 09:02:15.357767   40218 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-14048/.minikube/cert.pem, removing ...
	I1123 09:02:15.357788   40218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-14048/.minikube/cert.pem
	I1123 09:02:15.357862   40218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/cert.pem (1123 bytes)
	I1123 09:02:15.358014   40218 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-14048/.minikube/key.pem, removing ...
	I1123 09:02:15.358025   40218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-14048/.minikube/key.pem
	I1123 09:02:15.358054   40218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/key.pem (1675 bytes)
	I1123 09:02:15.358133   40218 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-14048/.minikube/ca.pem, removing ...
	I1123 09:02:15.358141   40218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-14048/.minikube/ca.pem
	I1123 09:02:15.358164   40218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/ca.pem (1082 bytes)
	I1123 09:02:15.358228   40218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem org=jenkins.test-preload-119969 san=[127.0.0.1 192.168.39.141 localhost minikube test-preload-119969]
	I1123 09:02:15.445235   40218 provision.go:177] copyRemoteCerts
	I1123 09:02:15.445290   40218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:02:15.447886   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.448268   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:15.448302   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.448443   40218 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/id_rsa Username:docker}
	I1123 09:02:15.543827   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:02:15.583058   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 09:02:15.622021   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:02:15.665875   40218 provision.go:87] duration metric: took 314.523953ms to configureAuth
	I1123 09:02:15.665898   40218 buildroot.go:189] setting minikube options for container-runtime
	I1123 09:02:15.666069   40218 config.go:182] Loaded profile config "test-preload-119969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 09:02:15.668970   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.669389   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:15.669414   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.669621   40218 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:15.669873   40218 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1123 09:02:15.669891   40218 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:02:15.929953   40218 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:02:15.929988   40218 machine.go:97] duration metric: took 1.015149801s to provisionDockerMachine
	I1123 09:02:15.929999   40218 start.go:293] postStartSetup for "test-preload-119969" (driver="kvm2")
	I1123 09:02:15.930009   40218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:02:15.930068   40218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:02:15.932875   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.933293   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:15.933317   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:15.933494   40218 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/id_rsa Username:docker}
	I1123 09:02:16.021380   40218 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:02:16.026655   40218 info.go:137] Remote host: Buildroot 2025.02
	I1123 09:02:16.026685   40218 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-14048/.minikube/addons for local assets ...
	I1123 09:02:16.026754   40218 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-14048/.minikube/files for local assets ...
	I1123 09:02:16.026829   40218 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-14048/.minikube/files/etc/ssl/certs/180552.pem -> 180552.pem in /etc/ssl/certs
	I1123 09:02:16.026918   40218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:02:16.038259   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/files/etc/ssl/certs/180552.pem --> /etc/ssl/certs/180552.pem (1708 bytes)
	I1123 09:02:16.067937   40218 start.go:296] duration metric: took 137.923216ms for postStartSetup
	I1123 09:02:16.067985   40218 fix.go:56] duration metric: took 13.822379939s for fixHost
	I1123 09:02:16.070528   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.070866   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:16.070890   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.071026   40218 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:16.071202   40218 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1123 09:02:16.071212   40218 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1123 09:02:16.185836   40218 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763888536.142067212
	
	I1123 09:02:16.185859   40218 fix.go:216] guest clock: 1763888536.142067212
	I1123 09:02:16.185866   40218 fix.go:229] Guest: 2025-11-23 09:02:16.142067212 +0000 UTC Remote: 2025-11-23 09:02:16.067990237 +0000 UTC m=+22.736497431 (delta=74.076975ms)
	I1123 09:02:16.185881   40218 fix.go:200] guest clock delta is within tolerance: 74.076975ms
	I1123 09:02:16.185885   40218 start.go:83] releasing machines lock for "test-preload-119969", held for 13.94029356s
	I1123 09:02:16.188625   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.188976   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:16.188999   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.189501   40218 ssh_runner.go:195] Run: cat /version.json
	I1123 09:02:16.189519   40218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:02:16.192339   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.192642   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.192709   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:16.192741   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.192883   40218 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/id_rsa Username:docker}
	I1123 09:02:16.193066   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:16.193089   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:16.193260   40218 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/id_rsa Username:docker}
	I1123 09:02:16.275357   40218 ssh_runner.go:195] Run: systemctl --version
	I1123 09:02:16.299914   40218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:02:16.445227   40218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:02:16.452402   40218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:02:16.452482   40218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:02:16.472945   40218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:02:16.472967   40218 start.go:496] detecting cgroup driver to use...
	I1123 09:02:16.473025   40218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:02:16.491184   40218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:02:16.508081   40218 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:02:16.508140   40218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:02:16.525773   40218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:02:16.543776   40218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:02:16.693272   40218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:02:16.914948   40218 docker.go:234] disabling docker service ...
	I1123 09:02:16.915023   40218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:02:16.932576   40218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:02:16.948355   40218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:02:17.102430   40218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:02:17.244278   40218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:02:17.260347   40218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:02:17.282876   40218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1123 09:02:17.282934   40218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:17.295034   40218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:02:17.295093   40218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:17.307589   40218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:17.319691   40218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:17.331721   40218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:02:17.344105   40218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:17.356833   40218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:17.376965   40218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
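The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the two simplest rewrites, assuming only the file path and option names taken from the log (illustrative, not minikube's implementation):

// crio_conf.go - in-place rewrite of pause_image and cgroup_manager,
// equivalent to the first two sed commands in the log above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cg := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = cg.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}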
	I1123 09:02:17.389196   40218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:02:17.399521   40218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 09:02:17.399566   40218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 09:02:17.424680   40218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
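The fallback sequence above is deliberate: the bridge-netfilter sysctl node only exists once the br_netfilter module is loaded, so the failed sysctl read is expected and triggers a modprobe, after which IPv4 forwarding is switched on. A Go sketch of that sequence (illustrative; paths come from the log):

// netfilter_check.go - probe bridge-nf-call-iptables, load br_netfilter if
// it is missing, then enable IPv4 forwarding, mirroring the logged steps.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// The sysctl only appears once the module is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}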
	I1123 09:02:17.437583   40218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:17.577648   40218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:02:17.697576   40218 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:02:17.697643   40218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:02:17.703281   40218 start.go:564] Will wait 60s for crictl version
	I1123 09:02:17.703333   40218 ssh_runner.go:195] Run: which crictl
	I1123 09:02:17.707483   40218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1123 09:02:17.743855   40218 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1123 09:02:17.743921   40218 ssh_runner.go:195] Run: crio --version
	I1123 09:02:17.773825   40218 ssh_runner.go:195] Run: crio --version
	I1123 09:02:17.806367   40218 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1123 09:02:17.810246   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:17.810671   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:17.810702   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:17.810875   40218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1123 09:02:17.815975   40218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
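The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ... pattern above makes the hosts entry idempotent: any stale host.minikube.internal line is dropped before the fresh one is appended. A Go sketch of the same idea (illustrative, not minikube's code):

// hosts_entry.go - idempotently (re)append the host.minikube.internal entry.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Drop any existing host.minikube.internal line, then append a fresh one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// Write a sibling temp file and rename, so readers never see a torn file.
	const tmp = "/etc/hosts.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		log.Fatal(err)
	}
}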
	I1123 09:02:17.831583   40218 kubeadm.go:884] updating cluster {Name:test-preload-119969 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-119969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:02:17.831694   40218 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 09:02:17.831743   40218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:02:17.867072   40218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1123 09:02:17.867137   40218 ssh_runner.go:195] Run: which lz4
	I1123 09:02:17.871631   40218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1123 09:02:17.876688   40218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1123 09:02:17.876720   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1123 09:02:19.398384   40218 crio.go:462] duration metric: took 1.52677595s to copy over tarball
	I1123 09:02:19.398458   40218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1123 09:02:21.056240   40218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657739171s)
	I1123 09:02:21.056267   40218 crio.go:469] duration metric: took 1.657854447s to extract the tarball
	I1123 09:02:21.056319   40218 ssh_runner.go:146] rm: /preloaded.tar.lz4
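The preload flow above is: stat the tarball on the guest, scp it over when missing, extract it with lz4-compressed tar preserving security xattrs, then delete it. A local Go sketch of the extract-and-clean-up step, mirroring the logged tar flags (illustrative):

// preload_extract.go - run the logged extraction command, then remove the
// tarball, as the log above does via ssh_runner.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}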
	I1123 09:02:21.096926   40218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:02:21.139655   40218 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:02:21.139683   40218 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:02:21.139694   40218 kubeadm.go:935] updating node { 192.168.39.141 8443 v1.32.0 crio true true} ...
	I1123 09:02:21.139819   40218 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-119969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-119969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:02:21.139905   40218 ssh_runner.go:195] Run: crio config
	I1123 09:02:21.189783   40218 cni.go:84] Creating CNI manager for ""
	I1123 09:02:21.189809   40218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:02:21.189825   40218 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:02:21.189843   40218 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-119969 NodeName:test-preload-119969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:02:21.189967   40218 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-119969"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
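minikube renders the kubeadm config above from in-process templates, substituting per-profile values such as the node IP, port, and name. A stdlib-only Go sketch of that rendering idea, with a hypothetical params struct and a deliberately truncated template (illustrative, not minikube's actual bootstrapper code):

// kubeadm_tmpl.go - render a fragment of the config above from a template.
package main

import (
	"log"
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	params := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{NodeIP: "192.168.39.141", Port: 8443, NodeName: "test-preload-119969"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}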
	I1123 09:02:21.190028   40218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1123 09:02:21.203046   40218 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:02:21.203126   40218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:02:21.215155   40218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1123 09:02:21.235955   40218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:02:21.256517   40218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1123 09:02:21.277686   40218 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I1123 09:02:21.282076   40218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:02:21.296826   40218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:21.436748   40218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:02:21.468982   40218 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969 for IP: 192.168.39.141
	I1123 09:02:21.469005   40218 certs.go:195] generating shared ca certs ...
	I1123 09:02:21.469023   40218 certs.go:227] acquiring lock for ca certs: {Name:mkaeb9dc4e066e858e41c686c8e5e48e63a99316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:21.469188   40218 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key
	I1123 09:02:21.469245   40218 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key
	I1123 09:02:21.469259   40218 certs.go:257] generating profile certs ...
	I1123 09:02:21.469348   40218 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/client.key
	I1123 09:02:21.469432   40218 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/apiserver.key.510d222d
	I1123 09:02:21.469518   40218 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/proxy-client.key
	I1123 09:02:21.469649   40218 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/18055.pem (1338 bytes)
	W1123 09:02:21.469693   40218 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-14048/.minikube/certs/18055_empty.pem, impossibly tiny 0 bytes
	I1123 09:02:21.469711   40218 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:02:21.469748   40218 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:02:21.469784   40218 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:02:21.469824   40218 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem (1675 bytes)
	I1123 09:02:21.469897   40218 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/files/etc/ssl/certs/180552.pem (1708 bytes)
	I1123 09:02:21.470480   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:02:21.512157   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:02:21.551226   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:02:21.581964   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 09:02:21.612820   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 09:02:21.647062   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:02:21.676446   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:02:21.705636   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:02:21.734883   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:02:21.764549   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/certs/18055.pem --> /usr/share/ca-certificates/18055.pem (1338 bytes)
	I1123 09:02:21.793872   40218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/files/etc/ssl/certs/180552.pem --> /usr/share/ca-certificates/180552.pem (1708 bytes)
	I1123 09:02:21.823522   40218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:02:21.844531   40218 ssh_runner.go:195] Run: openssl version
	I1123 09:02:21.851298   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/180552.pem && ln -fs /usr/share/ca-certificates/180552.pem /etc/ssl/certs/180552.pem"
	I1123 09:02:21.865219   40218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/180552.pem
	I1123 09:02:21.870645   40218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:19 /usr/share/ca-certificates/180552.pem
	I1123 09:02:21.870707   40218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/180552.pem
	I1123 09:02:21.878046   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/180552.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:02:21.891675   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:02:21.905531   40218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:21.911037   40218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:21.911084   40218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:21.918433   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:02:21.932065   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18055.pem && ln -fs /usr/share/ca-certificates/18055.pem /etc/ssl/certs/18055.pem"
	I1123 09:02:21.945658   40218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18055.pem
	I1123 09:02:21.950892   40218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:19 /usr/share/ca-certificates/18055.pem
	I1123 09:02:21.950934   40218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18055.pem
	I1123 09:02:21.958173   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18055.pem /etc/ssl/certs/51391683.0"
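Each CA above is installed by hashing it with `openssl x509 -hash` and symlinking `<hash>.0` into /etc/ssl/certs, which is how OpenSSL-based clients locate trust roots. A Go sketch of one hash-and-link step (illustrative; it shells out to openssl exactly as the logged commands do):

// cert_symlink.go - compute the OpenSSL subject hash for a CA file and link
// it into /etc/ssl/certs under <hash>.0.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the logged link name
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: drop any stale link, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}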
	I1123 09:02:21.971785   40218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:02:21.977037   40218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:02:21.984332   40218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:02:21.991671   40218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:02:21.999156   40218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:02:22.006579   40218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:02:22.013949   40218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
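The `-checkend 86400` probes above exit non-zero when a certificate will expire within 24 hours, which is what makes minikube regenerate it. An equivalent pure-Go check using crypto/x509 (an illustrative stand-in for the openssl calls):

// certcheck.go - report whether the PEM certificate given as argv[1]
// expires within 86400 seconds, with openssl -checkend exit semantics.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s")
		os.Exit(1) // same convention as openssl -checkend
	}
	fmt.Println("certificate ok")
}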
	I1123 09:02:22.021099   40218 kubeadm.go:401] StartCluster: {Name:test-preload-119969 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-119969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:02:22.021210   40218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:02:22.021265   40218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:02:22.056802   40218 cri.go:89] found id: ""
	I1123 09:02:22.056884   40218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:02:22.070021   40218 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:02:22.070045   40218 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:02:22.070104   40218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:02:22.082007   40218 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:02:22.082420   40218 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-119969" does not appear in /home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 09:02:22.082583   40218 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-14048/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-119969" cluster setting kubeconfig missing "test-preload-119969" context setting]
	I1123 09:02:22.082844   40218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/kubeconfig: {Name:mk15e2740703c77f3808fd0888f2d0465004dca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:22.083362   40218 kapi.go:59] client config for test-preload-119969: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/client.key", CAFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:02:22.083736   40218 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:02:22.083750   40218 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:02:22.083755   40218 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:02:22.083759   40218 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:02:22.083763   40218 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:02:22.084025   40218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:02:22.096575   40218 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.141
	I1123 09:02:22.096606   40218 kubeadm.go:1161] stopping kube-system containers ...
	I1123 09:02:22.096618   40218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1123 09:02:22.096673   40218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:02:22.131930   40218 cri.go:89] found id: ""
	I1123 09:02:22.132011   40218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1123 09:02:22.150695   40218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:02:22.163025   40218 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:02:22.163051   40218 kubeadm.go:158] found existing configuration files:
	
	I1123 09:02:22.163099   40218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:02:22.174508   40218 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:02:22.174563   40218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:02:22.186540   40218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:02:22.197600   40218 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:02:22.197661   40218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:02:22.209559   40218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:02:22.220643   40218 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:02:22.220698   40218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:02:22.232685   40218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:02:22.243915   40218 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:02:22.243968   40218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:02:22.255944   40218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:02:22.268213   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:02:22.324208   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:02:23.243217   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:02:23.501351   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:02:23.573306   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:02:23.639918   40218 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:02:23.640002   40218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:24.140978   40218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:24.640491   40218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:25.140149   40218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:25.640425   40218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:26.140731   40218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:26.181194   40218 api_server.go:72] duration metric: took 2.541285963s to wait for apiserver process to appear ...
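The half-second cadence of the pgrep runs above is a plain poll-until-timeout loop. A Go sketch of that loop, with a simplified pgrep pattern in place of the logged `-xnf kube-apiserver.*minikube.*` (illustrative):

// wait_apiserver.go - poll every 500ms until a kube-apiserver process
// exists or the timeout elapses.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches.
		if exec.Command("pgrep", "-x", "kube-apiserver").Run() == nil {
			log.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for kube-apiserver")
}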
	I1123 09:02:26.181222   40218 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:02:26.181243   40218 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1123 09:02:29.299792   40218 api_server.go:279] https://192.168.39.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:02:29.299820   40218 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:02:29.299855   40218 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1123 09:02:29.335400   40218 api_server.go:279] https://192.168.39.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:02:29.335426   40218 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:02:29.681983   40218 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1123 09:02:29.686903   40218 api_server.go:279] https://192.168.39.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:02:29.686928   40218 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:02:30.181510   40218 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1123 09:02:30.189297   40218 api_server.go:279] https://192.168.39.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:02:30.189320   40218 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:02:30.681712   40218 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1123 09:02:30.687246   40218 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1123 09:02:30.696880   40218 api_server.go:141] control plane version: v1.32.0
	I1123 09:02:30.696912   40218 api_server.go:131] duration metric: took 4.515682535s to wait for apiserver health ...
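The healthz sequence above is the expected shape of a control-plane restart: 403 while anonymous requests are still forbidden, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200. A Go sketch of a poller that tolerates those transient statuses (illustrative; it skips TLS verification for brevity, whereas minikube authenticates with client certificates):

// healthz_poll.go - poll /healthz until it returns 200, retrying on
// transient 403/500 responses and connection errors.
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.141:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				log.Printf("healthz: %s", body)
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}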
	I1123 09:02:30.696924   40218 cni.go:84] Creating CNI manager for ""
	I1123 09:02:30.696934   40218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:02:30.698611   40218 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1123 09:02:30.699813   40218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1123 09:02:30.712239   40218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
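The 496-byte /etc/cni/net.d/1-k8s.conflist written above configures the bridge CNI for the 10.244.0.0/16 pod CIDR chosen earlier. A Go sketch that emits a plausible conflist of that shape (the field values here are assumptions for illustration, not the exact file minikube writes):

// cni_conflist.go - print a minimal bridge+portmap CNI conflist.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portBindings": true}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(conflist); err != nil {
		log.Fatal(err)
	}
}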
	I1123 09:02:30.734852   40218 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:02:30.745571   40218 system_pods.go:59] 7 kube-system pods found
	I1123 09:02:30.745602   40218 system_pods.go:61] "coredns-668d6bf9bc-rx9bd" [4c94ce01-7e57-43e9-9078-897f35d047d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:02:30.745615   40218 system_pods.go:61] "etcd-test-preload-119969" [b1659b24-1eb2-43d0-950e-abcd4f020538] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:02:30.745623   40218 system_pods.go:61] "kube-apiserver-test-preload-119969" [ff05d887-39d0-4705-94d4-9629ce84101c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:02:30.745630   40218 system_pods.go:61] "kube-controller-manager-test-preload-119969" [290fd8c7-27ac-4007-af61-086fa1bf3500] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:02:30.745638   40218 system_pods.go:61] "kube-proxy-hsgck" [9265693b-fc28-4772-9251-0e408916c573] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:02:30.745646   40218 system_pods.go:61] "kube-scheduler-test-preload-119969" [e268d8e4-0075-4b06-8964-6ade16dda8ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:02:30.745651   40218 system_pods.go:61] "storage-provisioner" [27f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:02:30.745658   40218 system_pods.go:74] duration metric: took 10.782234ms to wait for pod list to return data ...
	I1123 09:02:30.745669   40218 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:02:30.749365   40218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1123 09:02:30.749387   40218 node_conditions.go:123] node cpu capacity is 2
	I1123 09:02:30.749398   40218 node_conditions.go:105] duration metric: took 3.725711ms to run NodePressure ...
	I1123 09:02:30.749441   40218 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:02:31.023364   40218 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1123 09:02:31.027254   40218 kubeadm.go:744] kubelet initialised
	I1123 09:02:31.027276   40218 kubeadm.go:745] duration metric: took 3.889986ms waiting for restarted kubelet to initialise ...
	I1123 09:02:31.027289   40218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:02:31.042892   40218 ops.go:34] apiserver oom_adj: -16
	I1123 09:02:31.042906   40218 kubeadm.go:602] duration metric: took 8.97285497s to restartPrimaryControlPlane
	I1123 09:02:31.042914   40218 kubeadm.go:403] duration metric: took 9.021824818s to StartCluster
	I1123 09:02:31.042933   40218 settings.go:142] acquiring lock: {Name:mkab6903339ca646213aa209a9d09b91734329a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:31.043005   40218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 09:02:31.043519   40218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/kubeconfig: {Name:mk15e2740703c77f3808fd0888f2d0465004dca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:31.043731   40218 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:02:31.043823   40218 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:02:31.043918   40218 addons.go:70] Setting storage-provisioner=true in profile "test-preload-119969"
	I1123 09:02:31.043937   40218 addons.go:239] Setting addon storage-provisioner=true in "test-preload-119969"
	W1123 09:02:31.043951   40218 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:02:31.043958   40218 config.go:182] Loaded profile config "test-preload-119969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 09:02:31.043983   40218 host.go:66] Checking if "test-preload-119969" exists ...
	I1123 09:02:31.043942   40218 addons.go:70] Setting default-storageclass=true in profile "test-preload-119969"
	I1123 09:02:31.044049   40218 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-119969"
	I1123 09:02:31.045226   40218 out.go:179] * Verifying Kubernetes components...
	I1123 09:02:31.046198   40218 kapi.go:59] client config for test-preload-119969: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/client.key", CAFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:02:31.046431   40218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:31.046447   40218 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:02:31.046459   40218 addons.go:239] Setting addon default-storageclass=true in "test-preload-119969"
	W1123 09:02:31.046514   40218 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:02:31.046533   40218 host.go:66] Checking if "test-preload-119969" exists ...
	I1123 09:02:31.047492   40218 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:02:31.047509   40218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:02:31.048138   40218 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:02:31.048156   40218 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:02:31.050478   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:31.050918   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:31.050934   40218 main.go:143] libmachine: domain test-preload-119969 has defined MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:31.050949   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:31.051175   40218 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/id_rsa Username:docker}
	I1123 09:02:31.051508   40218 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:4d:21", ip: ""} in network mk-test-preload-119969: {Iface:virbr1 ExpiryTime:2025-11-23 10:02:13 +0000 UTC Type:0 Mac:52:54:00:15:4d:21 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-119969 Clientid:01:52:54:00:15:4d:21}
	I1123 09:02:31.051541   40218 main.go:143] libmachine: domain test-preload-119969 has defined IP address 192.168.39.141 and MAC address 52:54:00:15:4d:21 in network mk-test-preload-119969
	I1123 09:02:31.051697   40218 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/test-preload-119969/id_rsa Username:docker}
	I1123 09:02:31.238715   40218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:02:31.256802   40218 node_ready.go:35] waiting up to 6m0s for node "test-preload-119969" to be "Ready" ...
	I1123 09:02:31.462773   40218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:02:31.490292   40218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:02:32.190443   40218 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 09:02:32.191728   40218 addons.go:530] duration metric: took 1.147916061s for enable addons: enabled=[storage-provisioner default-storageclass]
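
The addon phase above pushes each manifest onto the VM over SSH (`ssh_runner.go ... scp memory --> /etc/kubernetes/addons/...`) and then applies it with the cluster's own kubectl binary. A minimal Go sketch of that push-and-apply pattern, using golang.org/x/crypto/ssh; the shortened key path, the stdin pipe (instead of minikube's separate scp step), and the helper name are illustrative choices, not the ssh_runner.go implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// applyManifest pipes a manifest over SSH and applies it with the cluster's
// bundled kubectl, mirroring the scp + "kubectl apply" sequence in the log.
func applyManifest(addr, keyPath string, manifest []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // username shown in the sshutil.go lines above
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	// Apply from stdin rather than copying the file first; same end result.
	sess.Stdin = bytes.NewReader(manifest)
	return sess.Run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.32.0/kubectl apply -f -")
}

func main() {
	manifest, _ := os.ReadFile("storage-provisioner.yaml") // placeholder manifest
	err := applyManifest("192.168.39.141:22", "machines/test-preload-119969/id_rsa", manifest)
	fmt.Println("apply:", err)
}
```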
	W1123 09:02:33.260000   40218 node_ready.go:57] node "test-preload-119969" has "Ready":"False" status (will retry)
	W1123 09:02:35.260546   40218 node_ready.go:57] node "test-preload-119969" has "Ready":"False" status (will retry)
	W1123 09:02:37.760836   40218 node_ready.go:57] node "test-preload-119969" has "Ready":"False" status (will retry)
	I1123 09:02:39.760113   40218 node_ready.go:49] node "test-preload-119969" is "Ready"
	I1123 09:02:39.760142   40218 node_ready.go:38] duration metric: took 8.503286518s for node "test-preload-119969" to be "Ready" ...
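
The node_ready.go lines above are a poll loop: fetch the Node object, check its Ready condition, and retry until it flips to True or the 6-minute budget runs out. A minimal client-go sketch of that kind of Ready-condition wait; the kubeconfig path and the 2-second interval are assumptions, not minikube's code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition is True, the same
// check the log reports as `node "test-preload-119969" is "Ready"`.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: retry, don't abort
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "test-preload-119969", 6*time.Minute))
}
```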
	I1123 09:02:39.760157   40218 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:02:39.760216   40218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:39.781946   40218 api_server.go:72] duration metric: took 8.738185718s to wait for apiserver process to appear ...
	I1123 09:02:39.781970   40218 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:02:39.781986   40218 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1123 09:02:39.786977   40218 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1123 09:02:39.787826   40218 api_server.go:141] control plane version: v1.32.0
	I1123 09:02:39.787847   40218 api_server.go:131] duration metric: took 5.869602ms to wait for apiserver health ...
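
The healthz wait above is a plain HTTPS GET against the apiserver: a 200 with body "ok" (as logged) means healthy. A minimal sketch of such a probe, authenticating with the client certificate, key, and CA whose paths appear in the rest.Config dump earlier in the log; the short file names here are placeholders:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// healthz performs the GET logged by api_server.go; it returns the status
// code and body, e.g. "200 ok" for a healthy apiserver.
func healthz(url, certFile, keyFile, caFile string) (string, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return "", err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return "", err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return fmt.Sprintf("%d %s", resp.StatusCode, body), nil
}

func main() {
	out, err := healthz("https://192.168.39.141:8443/healthz", "client.crt", "client.key", "ca.crt")
	fmt.Println(out, err)
}
```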
	I1123 09:02:39.787855   40218 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:02:39.791326   40218 system_pods.go:59] 7 kube-system pods found
	I1123 09:02:39.791351   40218 system_pods.go:61] "coredns-668d6bf9bc-rx9bd" [4c94ce01-7e57-43e9-9078-897f35d047d8] Running
	I1123 09:02:39.791359   40218 system_pods.go:61] "etcd-test-preload-119969" [b1659b24-1eb2-43d0-950e-abcd4f020538] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:02:39.791365   40218 system_pods.go:61] "kube-apiserver-test-preload-119969" [ff05d887-39d0-4705-94d4-9629ce84101c] Running
	I1123 09:02:39.791375   40218 system_pods.go:61] "kube-controller-manager-test-preload-119969" [290fd8c7-27ac-4007-af61-086fa1bf3500] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:02:39.791379   40218 system_pods.go:61] "kube-proxy-hsgck" [9265693b-fc28-4772-9251-0e408916c573] Running
	I1123 09:02:39.791383   40218 system_pods.go:61] "kube-scheduler-test-preload-119969" [e268d8e4-0075-4b06-8964-6ade16dda8ee] Running
	I1123 09:02:39.791388   40218 system_pods.go:61] "storage-provisioner" [27f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3] Running
	I1123 09:02:39.791393   40218 system_pods.go:74] duration metric: took 3.533959ms to wait for pod list to return data ...
	I1123 09:02:39.791399   40218 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:02:39.793486   40218 default_sa.go:45] found service account: "default"
	I1123 09:02:39.793500   40218 default_sa.go:55] duration metric: took 2.096384ms for default service account to be created ...
	I1123 09:02:39.793507   40218 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:02:39.796819   40218 system_pods.go:86] 7 kube-system pods found
	I1123 09:02:39.796842   40218 system_pods.go:89] "coredns-668d6bf9bc-rx9bd" [4c94ce01-7e57-43e9-9078-897f35d047d8] Running
	I1123 09:02:39.796855   40218 system_pods.go:89] "etcd-test-preload-119969" [b1659b24-1eb2-43d0-950e-abcd4f020538] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:02:39.796861   40218 system_pods.go:89] "kube-apiserver-test-preload-119969" [ff05d887-39d0-4705-94d4-9629ce84101c] Running
	I1123 09:02:39.796872   40218 system_pods.go:89] "kube-controller-manager-test-preload-119969" [290fd8c7-27ac-4007-af61-086fa1bf3500] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:02:39.796878   40218 system_pods.go:89] "kube-proxy-hsgck" [9265693b-fc28-4772-9251-0e408916c573] Running
	I1123 09:02:39.796886   40218 system_pods.go:89] "kube-scheduler-test-preload-119969" [e268d8e4-0075-4b06-8964-6ade16dda8ee] Running
	I1123 09:02:39.796894   40218 system_pods.go:89] "storage-provisioner" [27f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3] Running
	I1123 09:02:39.796912   40218 system_pods.go:126] duration metric: took 3.389774ms to wait for k8s-apps to be running ...
	I1123 09:02:39.796925   40218 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:02:39.796974   40218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:39.814705   40218 system_svc.go:56] duration metric: took 17.773116ms WaitForService to wait for kubelet
	I1123 09:02:39.814734   40218 kubeadm.go:587] duration metric: took 8.770976197s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:02:39.814754   40218 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:02:39.817078   40218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1123 09:02:39.817095   40218 node_conditions.go:123] node cpu capacity is 2
	I1123 09:02:39.817108   40218 node_conditions.go:105] duration metric: took 2.348311ms to run NodePressure ...
	I1123 09:02:39.817120   40218 start.go:242] waiting for startup goroutines ...
	I1123 09:02:39.817140   40218 start.go:247] waiting for cluster config update ...
	I1123 09:02:39.817155   40218 start.go:256] writing updated cluster config ...
	I1123 09:02:39.817423   40218 ssh_runner.go:195] Run: rm -f paused
	I1123 09:02:39.823393   40218 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:02:39.823810   40218 kapi.go:59] client config for test-preload-119969: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/profiles/test-preload-119969/client.key", CAFile:"/home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:02:39.827323   40218 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-rx9bd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:39.832053   40218 pod_ready.go:94] pod "coredns-668d6bf9bc-rx9bd" is "Ready"
	I1123 09:02:39.832072   40218 pod_ready.go:86] duration metric: took 4.730397ms for pod "coredns-668d6bf9bc-rx9bd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:39.835397   40218 pod_ready.go:83] waiting for pod "etcd-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:40.341722   40218 pod_ready.go:94] pod "etcd-test-preload-119969" is "Ready"
	I1123 09:02:40.341749   40218 pod_ready.go:86] duration metric: took 506.335992ms for pod "etcd-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:40.343779   40218 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:40.348792   40218 pod_ready.go:94] pod "kube-apiserver-test-preload-119969" is "Ready"
	I1123 09:02:40.348819   40218 pod_ready.go:86] duration metric: took 5.007579ms for pod "kube-apiserver-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:40.351453   40218 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:02:42.357543   40218 pod_ready.go:104] pod "kube-controller-manager-test-preload-119969" is not "Ready", error: <nil>
	I1123 09:02:43.857724   40218 pod_ready.go:94] pod "kube-controller-manager-test-preload-119969" is "Ready"
	I1123 09:02:43.857755   40218 pod_ready.go:86] duration metric: took 3.506270752s for pod "kube-controller-manager-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:43.860516   40218 pod_ready.go:83] waiting for pod "kube-proxy-hsgck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:44.027890   40218 pod_ready.go:94] pod "kube-proxy-hsgck" is "Ready"
	I1123 09:02:44.027927   40218 pod_ready.go:86] duration metric: took 167.384551ms for pod "kube-proxy-hsgck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:44.227101   40218 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:44.626985   40218 pod_ready.go:94] pod "kube-scheduler-test-preload-119969" is "Ready"
	I1123 09:02:44.627014   40218 pod_ready.go:86] duration metric: took 399.874719ms for pod "kube-scheduler-test-preload-119969" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:44.627025   40218 pod_ready.go:40] duration metric: took 4.80360834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
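
The pod_ready.go phase above iterates over one label selector per control-plane component, listing the matching kube-system pods and waiting for each to report the PodReady condition (kube-controller-manager was the only one still settling, at 09:02:42). A minimal client-go sketch of that label-driven wait, using the same six selectors the log names; helper names and the 1-second interval are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// The six selectors the log waits on, one per control-plane component.
var selectors = []string{
	"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
	"component=kube-controller-manager", "k8s-app=kube-proxy",
	"component=kube-scheduler",
}

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			for _, sel := range selectors {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
					metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, nil // transient errors: keep polling
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil // not ready yet; retry next tick
					}
				}
			}
			return true, nil
		})
	fmt.Println("all control-plane pods ready:", err == nil)
}
```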
	I1123 09:02:44.670998   40218 start.go:625] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1123 09:02:44.672410   40218 out.go:203] 
	W1123 09:02:44.673425   40218 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1123 09:02:44.674500   40218 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1123 09:02:44.675493   40218 out.go:179] * Done! kubectl is now configured to use "test-preload-119969" cluster and "default" namespace by default
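
The "minor skew: 2" warning at start.go:625 compares the minor components of the host kubectl (1.34.2) and the cluster (1.32.0); kubectl is only supported within one minor version of the apiserver, hence the warning. A stdlib-only sketch of that computation, assuming "major.minor.patch" strings and ignoring the major component (fine while everything is on 1.x):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" versions, e.g. ("1.34.2", "1.32.0") -> 2.
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma < mb {
		ma, mb = mb, ma
	}
	return ma - mb, nil
}

func main() {
	skew, _ := minorSkew("1.34.2", "1.32.0")
	fmt.Println("minor skew:", skew) // prints 2; a skew above 1 triggers the warning
}
```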
	
	
	==> CRI-O <==
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.456453890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888565456431808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8349caef-287d-4127-bbb2-ede3ef05a6f8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.457375026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f57c0f4-c29e-400d-aec1-fa89e6dc4091 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.457443659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f57c0f4-c29e-400d-aec1-fa89e6dc4091 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.457669407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1428732c9e9adcb14f8dd093c7ebdb5c82f928461380db52b555cba092f65ddf,PodSandboxId:89d5fecefeed43b0e0a179869fd3e032b8550085aa49e40bb0d1a4f0f9b6686c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763888557695349411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rx9bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c94ce01-7e57-43e9-9078-897f35d047d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8fca3a2b3017efd70b968941b516a987e5189edba8c487af63fa41e5f7c4a05,PodSandboxId:8cfb16a45b0933d806c23894894ea4477805e709e86234f453def3129ce57c15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763888550025161056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hsgck,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9265693b-fc28-4772-9251-0e408916c573,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce2d4b2755219393cd4b91be8687965e52df2c732d169685033211412ef1643,PodSandboxId:b4a0e4d30e4c5cfb0c1f623349ca8d991f3b8e85d28be4b9451ceccc270026bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763888550009317281,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a5626acc0f2895fbbe11105fa32b4bf360bd4a2342b31f2a68500fb0bf23df,PodSandboxId:7bf87dc2ecd09902da01493d4ff8888f468ebfbee5d35bc2251b67aeb63b7d5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763888545840780930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c9a75cdf6d96632fb81ff189977abca3,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186b96edf6ef8914da0f9cca5fc6a86c99f07e9f680409da0ee879f200ea96bc,PodSandboxId:c0e7b4c97d88e0ab1ecc47d4a4e9eb193b06dce976824104e5098d828c593268,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763888545835723618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9556d763010881e386f9852
bdd0ddcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04699e712782a0d98116fddad4d277ef383db706f2def08d3b5469814f4fff90,PodSandboxId:4c06d975bc118b3af448d59351f7df04e61b30247bf4432304c5f45dc073a5bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763888545815156007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ca3d8d47e8a739fe0fa0137e77a0db0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74b6dbbf1fcabd92e0b7b23ef4d5e4838110e86c92f0f6570c76acd89004f26,PodSandboxId:2c86e2f93ad4fc86a39f3f410ec398a5d67cab358273628cfb569fe17ed3d1b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763888545785285246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb225f69e9d43f6fbdb6e9d05a7db75,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f57c0f4-c29e-400d-aec1-fa89e6dc4091 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.492808450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9f851a3-9e90-44d1-827d-684a94548e41 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.492896409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9f851a3-9e90-44d1-827d-684a94548e41 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.495396170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8e615f0-d5c6-49c5-b36f-eecefedb6ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.495929724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888565495908810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8e615f0-d5c6-49c5-b36f-eecefedb6ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.496935906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86afabd9-48eb-45f1-b9ed-048306e92ab6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.497021929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86afabd9-48eb-45f1-b9ed-048306e92ab6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.497169096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1428732c9e9adcb14f8dd093c7ebdb5c82f928461380db52b555cba092f65ddf,PodSandboxId:89d5fecefeed43b0e0a179869fd3e032b8550085aa49e40bb0d1a4f0f9b6686c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763888557695349411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rx9bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c94ce01-7e57-43e9-9078-897f35d047d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8fca3a2b3017efd70b968941b516a987e5189edba8c487af63fa41e5f7c4a05,PodSandboxId:8cfb16a45b0933d806c23894894ea4477805e709e86234f453def3129ce57c15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763888550025161056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hsgck,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9265693b-fc28-4772-9251-0e408916c573,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce2d4b2755219393cd4b91be8687965e52df2c732d169685033211412ef1643,PodSandboxId:b4a0e4d30e4c5cfb0c1f623349ca8d991f3b8e85d28be4b9451ceccc270026bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763888550009317281,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a5626acc0f2895fbbe11105fa32b4bf360bd4a2342b31f2a68500fb0bf23df,PodSandboxId:7bf87dc2ecd09902da01493d4ff8888f468ebfbee5d35bc2251b67aeb63b7d5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763888545840780930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c9a75cdf6d96632fb81ff189977abca3,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186b96edf6ef8914da0f9cca5fc6a86c99f07e9f680409da0ee879f200ea96bc,PodSandboxId:c0e7b4c97d88e0ab1ecc47d4a4e9eb193b06dce976824104e5098d828c593268,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763888545835723618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9556d763010881e386f9852
bdd0ddcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04699e712782a0d98116fddad4d277ef383db706f2def08d3b5469814f4fff90,PodSandboxId:4c06d975bc118b3af448d59351f7df04e61b30247bf4432304c5f45dc073a5bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763888545815156007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ca3d8d47e8a739fe0fa0137e77a0db0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74b6dbbf1fcabd92e0b7b23ef4d5e4838110e86c92f0f6570c76acd89004f26,PodSandboxId:2c86e2f93ad4fc86a39f3f410ec398a5d67cab358273628cfb569fe17ed3d1b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763888545785285246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb225f69e9d43f6fbdb6e9d05a7db75,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86afabd9-48eb-45f1-b9ed-048306e92ab6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.531750941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6e895ed-7102-4de7-ab9f-bf76232c6bf2 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.531824623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6e895ed-7102-4de7-ab9f-bf76232c6bf2 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.533434029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cfc9e4f-ba9e-4d82-bceb-a4e61035ebc0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.533915709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888565533893487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cfc9e4f-ba9e-4d82-bceb-a4e61035ebc0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.535152148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db4e6f7f-64ab-4010-ac34-b39f1f7fd26e name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.535254725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db4e6f7f-64ab-4010-ac34-b39f1f7fd26e name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.535453969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1428732c9e9adcb14f8dd093c7ebdb5c82f928461380db52b555cba092f65ddf,PodSandboxId:89d5fecefeed43b0e0a179869fd3e032b8550085aa49e40bb0d1a4f0f9b6686c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763888557695349411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rx9bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c94ce01-7e57-43e9-9078-897f35d047d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8fca3a2b3017efd70b968941b516a987e5189edba8c487af63fa41e5f7c4a05,PodSandboxId:8cfb16a45b0933d806c23894894ea4477805e709e86234f453def3129ce57c15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763888550025161056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hsgck,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9265693b-fc28-4772-9251-0e408916c573,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce2d4b2755219393cd4b91be8687965e52df2c732d169685033211412ef1643,PodSandboxId:b4a0e4d30e4c5cfb0c1f623349ca8d991f3b8e85d28be4b9451ceccc270026bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763888550009317281,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a5626acc0f2895fbbe11105fa32b4bf360bd4a2342b31f2a68500fb0bf23df,PodSandboxId:7bf87dc2ecd09902da01493d4ff8888f468ebfbee5d35bc2251b67aeb63b7d5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763888545840780930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c9a75cdf6d96632fb81ff189977abca3,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186b96edf6ef8914da0f9cca5fc6a86c99f07e9f680409da0ee879f200ea96bc,PodSandboxId:c0e7b4c97d88e0ab1ecc47d4a4e9eb193b06dce976824104e5098d828c593268,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763888545835723618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9556d763010881e386f9852
bdd0ddcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04699e712782a0d98116fddad4d277ef383db706f2def08d3b5469814f4fff90,PodSandboxId:4c06d975bc118b3af448d59351f7df04e61b30247bf4432304c5f45dc073a5bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763888545815156007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ca3d8d47e8a739fe0fa0137e77a0db0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74b6dbbf1fcabd92e0b7b23ef4d5e4838110e86c92f0f6570c76acd89004f26,PodSandboxId:2c86e2f93ad4fc86a39f3f410ec398a5d67cab358273628cfb569fe17ed3d1b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763888545785285246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb225f69e9d43f6fbdb6e9d05a7db75,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db4e6f7f-64ab-4010-ac34-b39f1f7fd26e name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.565963399Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd7ca4a3-6ca8-4f65-b591-3055e9ee0408 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.566062826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd7ca4a3-6ca8-4f65-b591-3055e9ee0408 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.567582330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5641c780-8da3-4300-9872-f661e790a1db name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.568565186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888565568542327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5641c780-8da3-4300-9872-f661e790a1db name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.569527944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8d2749c-d713-493f-8937-4ebef764f0cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.569579056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8d2749c-d713-493f-8937-4ebef764f0cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:02:45 test-preload-119969 crio[828]: time="2025-11-23 09:02:45.569791893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1428732c9e9adcb14f8dd093c7ebdb5c82f928461380db52b555cba092f65ddf,PodSandboxId:89d5fecefeed43b0e0a179869fd3e032b8550085aa49e40bb0d1a4f0f9b6686c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763888557695349411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rx9bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c94ce01-7e57-43e9-9078-897f35d047d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8fca3a2b3017efd70b968941b516a987e5189edba8c487af63fa41e5f7c4a05,PodSandboxId:8cfb16a45b0933d806c23894894ea4477805e709e86234f453def3129ce57c15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763888550025161056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hsgck,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9265693b-fc28-4772-9251-0e408916c573,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce2d4b2755219393cd4b91be8687965e52df2c732d169685033211412ef1643,PodSandboxId:b4a0e4d30e4c5cfb0c1f623349ca8d991f3b8e85d28be4b9451ceccc270026bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763888550009317281,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a5626acc0f2895fbbe11105fa32b4bf360bd4a2342b31f2a68500fb0bf23df,PodSandboxId:7bf87dc2ecd09902da01493d4ff8888f468ebfbee5d35bc2251b67aeb63b7d5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763888545840780930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c9a75cdf6d96632fb81ff189977abca3,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186b96edf6ef8914da0f9cca5fc6a86c99f07e9f680409da0ee879f200ea96bc,PodSandboxId:c0e7b4c97d88e0ab1ecc47d4a4e9eb193b06dce976824104e5098d828c593268,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763888545835723618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9556d763010881e386f9852
bdd0ddcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04699e712782a0d98116fddad4d277ef383db706f2def08d3b5469814f4fff90,PodSandboxId:4c06d975bc118b3af448d59351f7df04e61b30247bf4432304c5f45dc073a5bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763888545815156007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ca3d8d47e8a739fe0fa0137e77a0db0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74b6dbbf1fcabd92e0b7b23ef4d5e4838110e86c92f0f6570c76acd89004f26,PodSandboxId:2c86e2f93ad4fc86a39f3f410ec398a5d67cab358273628cfb569fe17ed3d1b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763888545785285246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-119969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb225f69e9d43f6fbdb6e9d05a7db75,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8d2749c-d713-493f-8937-4ebef764f0cb name=/runtime.v1.RuntimeService/ListContainers
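
The CRI-O debug lines above are the server side of CRI gRPC calls (runtime.v1.RuntimeService/ListContainers and friends), polled repeatedly while the logs were collected. A minimal client-side sketch of the same ListContainers call, roughly what crictl or the kubelet issue, using k8s.io/cri-api over the socket named in the node's cri-socket annotation below; dial options and output formatting are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path from the kubeadm.alpha.kubernetes.io/cri-socket annotation
	// in the "describe nodes" section.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter: the "No filters were applied, returning full container
	// list" case the CRI-O log shows.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
```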
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	1428732c9e9ad       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago       Running             coredns                   1                   89d5fecefeed4       coredns-668d6bf9bc-rx9bd                      kube-system
	b8fca3a2b3017       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   8cfb16a45b093       kube-proxy-hsgck                              kube-system
	fce2d4b275521       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   b4a0e4d30e4c5       storage-provisioner                           kube-system
	03a5626acc0f2       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   7bf87dc2ecd09       kube-controller-manager-test-preload-119969   kube-system
	186b96edf6ef8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   c0e7b4c97d88e       etcd-test-preload-119969                      kube-system
	04699e712782a       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   4c06d975bc118       kube-scheduler-test-preload-119969            kube-system
	e74b6dbbf1fca       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   2c86e2f93ad4f       kube-apiserver-test-preload-119969            kube-system
	
	
	==> coredns [1428732c9e9adcb14f8dd093c7ebdb5c82f928461380db52b555cba092f65ddf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49345 - 47782 "HINFO IN 3565134843079725669.2342447663838331303. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415185676s
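
The single query logged above is CoreDNS's own startup self-check (a random HINFO probe against itself, used for loop detection), so the NXDOMAIN is expected. To exercise the server directly, a resolver can be pinned to its address; a minimal stdlib sketch, where the 10.96.0.10 ClusterIP is the conventional kube-dns address and an assumption here, since it does not appear in this log:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver that sends every lookup to the cluster DNS instead of the
	// host's configured nameservers.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}
```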
	
	
	==> describe nodes <==
	Name:               test-preload-119969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-119969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=test-preload-119969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_01_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:01:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-119969
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:02:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:02:39 +0000   Sun, 23 Nov 2025 09:01:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:02:39 +0000   Sun, 23 Nov 2025 09:01:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:02:39 +0000   Sun, 23 Nov 2025 09:01:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:02:39 +0000   Sun, 23 Nov 2025 09:02:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    test-preload-119969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 11b0449ed7e24cb397031c98df343d57
	  System UUID:                11b0449e-d7e2-4cb3-9703-1c98df343d57
	  Boot ID:                    0ea2db38-74d2-4996-94da-02a078a63d43
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-rx9bd                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     71s
	  kube-system                 etcd-test-preload-119969                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         76s
	  kube-system                 kube-apiserver-test-preload-119969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-test-preload-119969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-hsgck                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-test-preload-119969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 71s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   Starting                 77s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  76s                kubelet          Node test-preload-119969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s                kubelet          Node test-preload-119969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s                kubelet          Node test-preload-119969 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                75s                kubelet          Node test-preload-119969 status is now: NodeReady
	  Normal   RegisteredNode           72s                node-controller  Node test-preload-119969 event: Registered Node test-preload-119969 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-119969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-119969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-119969 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-119969 has been rebooted, boot id: 0ea2db38-74d2-4996-94da-02a078a63d43
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-119969 event: Registered Node test-preload-119969 in Controller
	
	
	==> dmesg <==
	[Nov23 09:02] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005378] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.930899] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.109377] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.577106] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.000098] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.028949] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [186b96edf6ef8914da0f9cca5fc6a86c99f07e9f680409da0ee879f200ea96bc] <==
	{"level":"info","ts":"2025-11-23T09:02:26.282171Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:02:26.289088Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"2398e045949c73cb","initial-advertise-peer-urls":["https://192.168.39.141:2380"],"listen-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T09:02:26.289199Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T09:02:26.274412Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T09:02:26.293747Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T09:02:26.274295Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-23T09:02:26.281565Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2025-11-23T09:02:26.293869Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2025-11-23T09:02:26.293896Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T09:02:28.148652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T09:02:28.148722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T09:02:28.148770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgPreVoteResp from 2398e045949c73cb at term 2"}
	{"level":"info","ts":"2025-11-23T09:02:28.148786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T09:02:28.148796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgVoteResp from 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2025-11-23T09:02:28.148814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became leader at term 3"}
	{"level":"info","ts":"2025-11-23T09:02:28.148823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2398e045949c73cb elected leader 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2025-11-23T09:02:28.150490Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"2398e045949c73cb","local-member-attributes":"{Name:test-preload-119969 ClientURLs:[https://192.168.39.141:2379]}","request-path":"/0/members/2398e045949c73cb/attributes","cluster-id":"bf8381628c3e4cea","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T09:02:28.150738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T09:02:28.150690Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T09:02:28.151336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T09:02:28.151366Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T09:02:28.151989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-23T09:02:28.152483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.141:2379"}
	{"level":"info","ts":"2025-11-23T09:02:28.153277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-23T09:02:28.153946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:02:45 up 0 min,  0 users,  load average: 0.87, 0.23, 0.08
	Linux test-preload-119969 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e74b6dbbf1fcabd92e0b7b23ef4d5e4838110e86c92f0f6570c76acd89004f26] <==
	I1123 09:02:29.397706       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:02:29.397749       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:02:29.397766       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:02:29.397781       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:02:29.400325       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1123 09:02:29.400435       1 policy_source.go:240] refreshing policies
	I1123 09:02:29.459380       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1123 09:02:29.459819       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1123 09:02:29.460445       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:02:29.461014       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:02:29.461817       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:02:29.462260       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:02:29.462285       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:02:29.462490       1 shared_informer.go:320] Caches are synced for configmaps
	I1123 09:02:29.465010       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:02:29.468291       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:02:29.639018       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1123 09:02:30.269867       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:02:30.850813       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1123 09:02:30.886020       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1123 09:02:30.918717       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:02:30.925008       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:02:32.578969       1 controller.go:615] quota admission added evaluator for: endpoints
	I1123 09:02:32.930568       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1123 09:02:32.978074       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [03a5626acc0f2895fbbe11105fa32b4bf360bd4a2342b31f2a68500fb0bf23df] <==
	I1123 09:02:32.595011       1 shared_informer.go:320] Caches are synced for namespace
	I1123 09:02:32.597338       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1123 09:02:32.597581       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1123 09:02:32.598802       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-119969"
	I1123 09:02:32.598846       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1123 09:02:32.598857       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1123 09:02:32.603283       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1123 09:02:32.606241       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1123 09:02:32.611496       1 shared_informer.go:320] Caches are synced for disruption
	I1123 09:02:32.622879       1 shared_informer.go:320] Caches are synced for garbage collector
	I1123 09:02:32.625687       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1123 09:02:32.626116       1 shared_informer.go:320] Caches are synced for daemon sets
	I1123 09:02:32.626442       1 shared_informer.go:320] Caches are synced for deployment
	I1123 09:02:32.627460       1 shared_informer.go:320] Caches are synced for stateful set
	I1123 09:02:32.627501       1 shared_informer.go:320] Caches are synced for attach detach
	I1123 09:02:32.627774       1 shared_informer.go:320] Caches are synced for HPA
	I1123 09:02:32.627813       1 shared_informer.go:320] Caches are synced for job
	I1123 09:02:32.936767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="330.461135ms"
	I1123 09:02:32.937560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="217.565µs"
	I1123 09:02:38.781908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.328µs"
	I1123 09:02:38.821420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.617542ms"
	I1123 09:02:38.821503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="37.918µs"
	I1123 09:02:39.723687       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-119969"
	I1123 09:02:39.739815       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-119969"
	I1123 09:02:42.577513       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b8fca3a2b3017efd70b968941b516a987e5189edba8c487af63fa41e5f7c4a05] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1123 09:02:30.249540       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1123 09:02:30.301019       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1123 09:02:30.311514       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:02:30.443477       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1123 09:02:30.443689       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1123 09:02:30.443907       1 server_linux.go:170] "Using iptables Proxier"
	I1123 09:02:30.451370       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:02:30.451745       1 server.go:497] "Version info" version="v1.32.0"
	I1123 09:02:30.451794       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:02:30.455271       1 config.go:199] "Starting service config controller"
	I1123 09:02:30.455323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1123 09:02:30.455395       1 config.go:105] "Starting endpoint slice config controller"
	I1123 09:02:30.455412       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1123 09:02:30.457822       1 config.go:329] "Starting node config controller"
	I1123 09:02:30.457864       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1123 09:02:30.555856       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1123 09:02:30.555910       1 shared_informer.go:320] Caches are synced for service config
	I1123 09:02:30.558302       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [04699e712782a0d98116fddad4d277ef383db706f2def08d3b5469814f4fff90] <==
	I1123 09:02:26.741843       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:02:29.331243       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:02:29.331319       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:02:29.331331       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:02:29.331341       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:02:29.382584       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1123 09:02:29.382689       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:02:29.386980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:02:29.387062       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 09:02:29.389010       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1123 09:02:29.389308       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:02:29.488018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: E1123 09:02:29.433331    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-119969\" already exists" pod="kube-system/etcd-test-preload-119969"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: I1123 09:02:29.433367    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-119969"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: E1123 09:02:29.442798    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-119969\" already exists" pod="kube-system/kube-apiserver-test-preload-119969"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: I1123 09:02:29.442841    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-119969"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: E1123 09:02:29.453036    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-119969\" already exists" pod="kube-system/kube-controller-manager-test-preload-119969"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: I1123 09:02:29.574283    1160 apiserver.go:52] "Watching apiserver"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: E1123 09:02:29.581799    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rx9bd" podUID="4c94ce01-7e57-43e9-9078-897f35d047d8"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: I1123 09:02:29.587226    1160 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: I1123 09:02:29.631439    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9265693b-fc28-4772-9251-0e408916c573-xtables-lock\") pod \"kube-proxy-hsgck\" (UID: \"9265693b-fc28-4772-9251-0e408916c573\") " pod="kube-system/kube-proxy-hsgck"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: I1123 09:02:29.631566    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9265693b-fc28-4772-9251-0e408916c573-lib-modules\") pod \"kube-proxy-hsgck\" (UID: \"9265693b-fc28-4772-9251-0e408916c573\") " pod="kube-system/kube-proxy-hsgck"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: I1123 09:02:29.631635    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/27f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3-tmp\") pod \"storage-provisioner\" (UID: \"27f4ee9e-7d45-49d8-b0c5-70f2e3cec2f3\") " pod="kube-system/storage-provisioner"
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: E1123 09:02:29.631907    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 09:02:29 test-preload-119969 kubelet[1160]: E1123 09:02:29.633674    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume podName:4c94ce01-7e57-43e9-9078-897f35d047d8 nodeName:}" failed. No retries permitted until 2025-11-23 09:02:30.131947186 +0000 UTC m=+6.663379491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume") pod "coredns-668d6bf9bc-rx9bd" (UID: "4c94ce01-7e57-43e9-9078-897f35d047d8") : object "kube-system"/"coredns" not registered
	Nov 23 09:02:30 test-preload-119969 kubelet[1160]: E1123 09:02:30.136184    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 09:02:30 test-preload-119969 kubelet[1160]: E1123 09:02:30.136279    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume podName:4c94ce01-7e57-43e9-9078-897f35d047d8 nodeName:}" failed. No retries permitted until 2025-11-23 09:02:31.136258144 +0000 UTC m=+7.667690456 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume") pod "coredns-668d6bf9bc-rx9bd" (UID: "4c94ce01-7e57-43e9-9078-897f35d047d8") : object "kube-system"/"coredns" not registered
	Nov 23 09:02:31 test-preload-119969 kubelet[1160]: E1123 09:02:31.142162    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 09:02:31 test-preload-119969 kubelet[1160]: E1123 09:02:31.142247    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume podName:4c94ce01-7e57-43e9-9078-897f35d047d8 nodeName:}" failed. No retries permitted until 2025-11-23 09:02:33.142233111 +0000 UTC m=+9.673665416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume") pod "coredns-668d6bf9bc-rx9bd" (UID: "4c94ce01-7e57-43e9-9078-897f35d047d8") : object "kube-system"/"coredns" not registered
	Nov 23 09:02:31 test-preload-119969 kubelet[1160]: E1123 09:02:31.635895    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rx9bd" podUID="4c94ce01-7e57-43e9-9078-897f35d047d8"
	Nov 23 09:02:33 test-preload-119969 kubelet[1160]: E1123 09:02:33.157258    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 09:02:33 test-preload-119969 kubelet[1160]: E1123 09:02:33.157347    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume podName:4c94ce01-7e57-43e9-9078-897f35d047d8 nodeName:}" failed. No retries permitted until 2025-11-23 09:02:37.157333419 +0000 UTC m=+13.688765726 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4c94ce01-7e57-43e9-9078-897f35d047d8-config-volume") pod "coredns-668d6bf9bc-rx9bd" (UID: "4c94ce01-7e57-43e9-9078-897f35d047d8") : object "kube-system"/"coredns" not registered
	Nov 23 09:02:33 test-preload-119969 kubelet[1160]: E1123 09:02:33.638091    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rx9bd" podUID="4c94ce01-7e57-43e9-9078-897f35d047d8"
	Nov 23 09:02:33 test-preload-119969 kubelet[1160]: E1123 09:02:33.654256    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888553653125352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 23 09:02:33 test-preload-119969 kubelet[1160]: E1123 09:02:33.654840    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888553653125352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 23 09:02:43 test-preload-119969 kubelet[1160]: E1123 09:02:43.657370    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888563657037096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 23 09:02:43 test-preload-119969 kubelet[1160]: E1123 09:02:43.657415    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763888563657037096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [fce2d4b2755219393cd4b91be8687965e52df2c732d169685033211412ef1643] <==
	I1123 09:02:30.104362       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-119969 -n test-preload-119969
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-119969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-119969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-119969
--- FAIL: TestPreload (124.62s)
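
Note on the kube-proxy errors in the log above: the two "Error cleaning up nftables rules ... Operation not supported" entries, followed by "Using iptables Proxier", indicate the guest kernel rejects nf_tables operations, so kube-proxy falls back to its iptables backend. A minimal standalone sketch of the same probe in Go (assumptions: the nft binary is present in the guest, and the table name probe-kube-proxy is illustrative, not anything kube-proxy creates):

	// probe_nft.go: attempt to create and delete a scratch nftables table,
	// mirroring the "add table ip kube-proxy" command kube-proxy runs above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// On kernels without nf_tables support this fails with
		// "Operation not supported", which is what the kube-proxy log shows.
		out, err := exec.Command("nft", "add", "table", "ip", "probe-kube-proxy").CombinedOutput()
		if err != nil {
			fmt.Printf("nf_tables unavailable (iptables fallback expected): %v\n%s", err, out)
			return
		}
		// Remove the scratch table so the probe leaves no trace.
		_ = exec.Command("nft", "delete", "table", "ip", "probe-kube-proxy").Run()
		fmt.Println("nf_tables available")
	}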

                                                
                                    

Test pass (309/351)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.05
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
22 TestOffline 50.19
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 128.75
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.57
35 TestAddons/parallel/Registry 16.93
36 TestAddons/parallel/RegistryCreds 0.7
38 TestAddons/parallel/InspektorGadget 10.86
39 TestAddons/parallel/MetricsServer 5.99
41 TestAddons/parallel/CSI 64.19
42 TestAddons/parallel/Headlamp 19.35
43 TestAddons/parallel/CloudSpanner 5.57
44 TestAddons/parallel/LocalPath 53.66
45 TestAddons/parallel/NvidiaDevicePlugin 6.52
46 TestAddons/parallel/Yakd 10.93
48 TestAddons/StoppedEnableDisable 87.88
49 TestCertOptions 58.03
50 TestCertExpiration 503.15
52 TestForceSystemdFlag 77.87
53 TestForceSystemdEnv 58.44
58 TestErrorSpam/setup 39.57
59 TestErrorSpam/start 0.31
60 TestErrorSpam/status 0.65
61 TestErrorSpam/pause 1.49
62 TestErrorSpam/unpause 1.68
63 TestErrorSpam/stop 73.63
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 88.62
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 53.61
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
75 TestFunctional/serial/CacheCmd/cache/add_local 1.04
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 33.27
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.27
86 TestFunctional/serial/LogsFileCmd 1.3
87 TestFunctional/serial/InvalidService 4.51
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 14.41
91 TestFunctional/parallel/DryRun 0.21
92 TestFunctional/parallel/InternationalLanguage 0.11
93 TestFunctional/parallel/StatusCmd 0.71
97 TestFunctional/parallel/ServiceCmdConnect 8.48
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 38.99
101 TestFunctional/parallel/SSHCmd 0.36
102 TestFunctional/parallel/CpCmd 1.15
103 TestFunctional/parallel/MySQL 21.2
104 TestFunctional/parallel/FileSync 0.18
105 TestFunctional/parallel/CertSync 1.27
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
113 TestFunctional/parallel/License 0.26
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.57
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
120 TestFunctional/parallel/ImageCommands/ImageBuild 6.15
121 TestFunctional/parallel/ImageCommands/Setup 0.41
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.27
126 TestFunctional/parallel/ServiceCmd/DeployApp 19.23
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.15
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.36
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.68
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.86
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.33
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.44
142 TestFunctional/parallel/ServiceCmd/List 0.41
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.23
145 TestFunctional/parallel/ServiceCmd/Format 0.22
146 TestFunctional/parallel/ServiceCmd/URL 0.25
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
148 TestFunctional/parallel/ProfileCmd/profile_list 0.31
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
150 TestFunctional/parallel/MountCmd/any-port 7.86
151 TestFunctional/parallel/MountCmd/specific-port 1.51
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 192.4
161 TestMultiControlPlane/serial/DeployApp 6.34
162 TestMultiControlPlane/serial/PingHostFromPods 1.32
163 TestMultiControlPlane/serial/AddWorkerNode 43.62
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
166 TestMultiControlPlane/serial/CopyFile 10.5
167 TestMultiControlPlane/serial/StopSecondaryNode 89.01
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
169 TestMultiControlPlane/serial/RestartSecondaryNode 35.2
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 374.75
172 TestMultiControlPlane/serial/DeleteSecondaryNode 17.85
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
174 TestMultiControlPlane/serial/StopCluster 244.15
175 TestMultiControlPlane/serial/RestartCluster 101.14
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
177 TestMultiControlPlane/serial/AddSecondaryNode 72.7
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
183 TestJSONOutput/start/Command 77.52
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.7
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.61
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.84
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 78.14
215 TestMountStart/serial/StartWithMountFirst 19.28
216 TestMountStart/serial/VerifyMountFirst 0.28
217 TestMountStart/serial/StartWithMountSecond 21.94
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.53
220 TestMountStart/serial/VerifyMountPostDelete 0.3
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 16.95
223 TestMountStart/serial/VerifyMountPostStop 0.3
226 TestMultiNode/serial/FreshStart2Nodes 97.02
227 TestMultiNode/serial/DeployApp2Nodes 5.02
228 TestMultiNode/serial/PingHostFrom2Pods 0.83
229 TestMultiNode/serial/AddNode 41.01
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.46
232 TestMultiNode/serial/CopyFile 5.88
233 TestMultiNode/serial/StopNode 2.42
234 TestMultiNode/serial/StartAfterStop 38.3
235 TestMultiNode/serial/RestartKeepsNodes 306.48
236 TestMultiNode/serial/DeleteNode 2.5
237 TestMultiNode/serial/StopMultiNode 172.57
238 TestMultiNode/serial/RestartMultiNode 86.55
239 TestMultiNode/serial/ValidateNameConflict 41.47
246 TestScheduledStopUnix 108.36
250 TestRunningBinaryUpgrade 115.9
252 TestKubernetesUpgrade 218.61
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 95.15
257 TestStoppedBinaryUpgrade/Setup 0.47
258 TestStoppedBinaryUpgrade/Upgrade 141.23
259 TestNoKubernetes/serial/StartWithStopK8s 48.99
260 TestNoKubernetes/serial/Start 43.61
268 TestNetworkPlugins/group/false 6.26
272 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
274 TestNoKubernetes/serial/ProfileList 8.47
275 TestNoKubernetes/serial/Stop 1.21
276 TestNoKubernetes/serial/StartNoArgs 50.22
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
279 TestPause/serial/Start 76.18
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
288 TestISOImage/Setup 27.59
289 TestPause/serial/SecondStartNoReconfiguration 42.07
291 TestISOImage/Binaries/crictl 0.17
292 TestISOImage/Binaries/curl 0.17
293 TestISOImage/Binaries/docker 0.18
294 TestISOImage/Binaries/git 0.18
295 TestISOImage/Binaries/iptables 0.19
296 TestISOImage/Binaries/podman 0.19
297 TestISOImage/Binaries/rsync 0.18
298 TestISOImage/Binaries/socat 0.17
299 TestISOImage/Binaries/wget 0.17
300 TestISOImage/Binaries/VBoxControl 0.18
301 TestISOImage/Binaries/VBoxService 0.18
302 TestNetworkPlugins/group/auto/Start 88.24
303 TestNetworkPlugins/group/kindnet/Start 80.61
304 TestPause/serial/Pause 0.86
305 TestPause/serial/VerifyStatus 0.24
306 TestPause/serial/Unpause 0.77
307 TestPause/serial/PauseAgain 1.39
308 TestPause/serial/DeletePaused 0.84
309 TestPause/serial/VerifyDeletedResources 5.39
310 TestNetworkPlugins/group/calico/Start 72.03
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/auto/KubeletFlags 0.17
313 TestNetworkPlugins/group/auto/NetCatPod 11.27
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
315 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
316 TestNetworkPlugins/group/auto/DNS 0.17
317 TestNetworkPlugins/group/auto/Localhost 0.15
318 TestNetworkPlugins/group/auto/HairPin 0.15
319 TestNetworkPlugins/group/kindnet/DNS 0.2
320 TestNetworkPlugins/group/kindnet/Localhost 0.14
321 TestNetworkPlugins/group/kindnet/HairPin 0.15
322 TestNetworkPlugins/group/custom-flannel/Start 73.77
323 TestNetworkPlugins/group/enable-default-cni/Start 103.06
324 TestNetworkPlugins/group/calico/ControllerPod 6.15
325 TestNetworkPlugins/group/calico/KubeletFlags 0.3
326 TestNetworkPlugins/group/calico/NetCatPod 10.44
327 TestNetworkPlugins/group/calico/DNS 0.18
328 TestNetworkPlugins/group/calico/Localhost 0.13
329 TestNetworkPlugins/group/calico/HairPin 0.14
330 TestNetworkPlugins/group/flannel/Start 80.87
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.78
333 TestNetworkPlugins/group/custom-flannel/DNS 0.16
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
336 TestNetworkPlugins/group/bridge/Start 57.19
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
344 TestNetworkPlugins/group/flannel/NetCatPod 12.31
346 TestStartStop/group/old-k8s-version/serial/FirstStart 56.69
347 TestNetworkPlugins/group/flannel/DNS 0.16
348 TestNetworkPlugins/group/flannel/Localhost 0.16
349 TestNetworkPlugins/group/flannel/HairPin 0.15
351 TestStartStop/group/no-preload/serial/FirstStart 102.38
352 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
353 TestNetworkPlugins/group/bridge/NetCatPod 13.63
354 TestNetworkPlugins/group/bridge/DNS 0.16
355 TestNetworkPlugins/group/bridge/Localhost 0.13
356 TestNetworkPlugins/group/bridge/HairPin 0.15
358 TestStartStop/group/embed-certs/serial/FirstStart 82.7
359 TestStartStop/group/old-k8s-version/serial/DeployApp 10.35
360 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.38
361 TestStartStop/group/old-k8s-version/serial/Stop 90.5
362 TestStartStop/group/no-preload/serial/DeployApp 10.3
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
365 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.85
366 TestStartStop/group/no-preload/serial/Stop 89.12
367 TestStartStop/group/embed-certs/serial/DeployApp 9.3
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
369 TestStartStop/group/embed-certs/serial/Stop 83.47
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
371 TestStartStop/group/old-k8s-version/serial/SecondStart 43.83
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15
373 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
375 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
376 TestStartStop/group/default-k8s-diff-port/serial/Stop 77.95
377 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
378 TestStartStop/group/no-preload/serial/SecondStart 59.75
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
380 TestStartStop/group/old-k8s-version/serial/Pause 2.66
382 TestStartStop/group/newest-cni/serial/FirstStart 62.77
383 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
384 TestStartStop/group/embed-certs/serial/SecondStart 77.83
385 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
386 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
387 TestStartStop/group/newest-cni/serial/DeployApp 0
388 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
389 TestStartStop/group/newest-cni/serial/Stop 89.17
390 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
391 TestStartStop/group/no-preload/serial/Pause 2.7
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.61
395 TestISOImage/PersistentMounts//data 0.18
396 TestISOImage/PersistentMounts//var/lib/docker 0.19
397 TestISOImage/PersistentMounts//var/lib/cni 0.17
398 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
399 TestISOImage/PersistentMounts//var/lib/minikube 0.18
400 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
401 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
402 TestISOImage/VersionJSON 0.19
403 TestISOImage/eBPFSupport 0.2
404 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
405 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.19
406 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
407 TestStartStop/group/embed-certs/serial/Pause 2.7
408 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
409 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
410 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
411 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.44
412 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
413 TestStartStop/group/newest-cni/serial/SecondStart 32.11
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
417 TestStartStop/group/newest-cni/serial/Pause 3.11
TestDownloadOnly/v1.28.0/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-850642 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-850642 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.417687379s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 08:10:53.267926   18055 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 08:10:53.268001   18055 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-850642
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-850642: exit status 85 (67.919334ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-850642 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-850642 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:10:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:10:46.901146   18067 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:10:46.901382   18067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:46.901392   18067 out.go:374] Setting ErrFile to fd 2...
	I1123 08:10:46.901397   18067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:46.901652   18067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	W1123 08:10:46.901813   18067 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21969-14048/.minikube/config/config.json: open /home/jenkins/minikube-integration/21969-14048/.minikube/config/config.json: no such file or directory
	I1123 08:10:46.902871   18067 out.go:368] Setting JSON to true
	I1123 08:10:46.903790   18067 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3196,"bootTime":1763882251,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:10:46.903841   18067 start.go:143] virtualization: kvm guest
	I1123 08:10:46.907760   18067 out.go:99] [download-only-850642] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1123 08:10:46.907866   18067 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 08:10:46.907905   18067 notify.go:221] Checking for updates...
	I1123 08:10:46.909124   18067 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:10:46.910738   18067 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:10:46.911936   18067 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 08:10:46.913113   18067 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 08:10:46.914424   18067 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 08:10:46.916900   18067 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:10:46.917133   18067 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:10:47.419822   18067 out.go:99] Using the kvm2 driver based on user configuration
	I1123 08:10:47.419867   18067 start.go:309] selected driver: kvm2
	I1123 08:10:47.419876   18067 start.go:927] validating driver "kvm2" against <nil>
	I1123 08:10:47.420203   18067 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:10:47.420695   18067 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1123 08:10:47.420841   18067 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:10:47.420866   18067 cni.go:84] Creating CNI manager for ""
	I1123 08:10:47.420912   18067 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 08:10:47.420920   18067 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1123 08:10:47.420956   18067 start.go:353] cluster config:
	{Name:download-only-850642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-850642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:10:47.421111   18067 iso.go:125] acquiring lock: {Name:mk4b6da1d874cbf82d9df128fb5e9a0d9b7ea794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:10:47.422497   18067 out.go:99] Downloading VM boot image ...
	I1123 08:10:47.422528   18067 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21969-14048/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1123 08:10:50.312230   18067 out.go:99] Starting "download-only-850642" primary control-plane node in "download-only-850642" cluster
	I1123 08:10:50.312278   18067 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:10:50.328104   18067 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 08:10:50.328142   18067 cache.go:65] Caching tarball of preloaded images
	I1123 08:10:50.328294   18067 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:10:50.329903   18067 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 08:10:50.329922   18067 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 08:10:50.353296   18067 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1123 08:10:50.353397   18067 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-850642 host does not exist
	  To start a cluster, run: "minikube start -p download-only-850642"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-850642
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-334487 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-334487 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.051087822s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.05s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 08:10:56.674822   18055 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 08:10:56.674868   18055 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-334487
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-334487: exit status 85 (69.732809ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-850642 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-850642 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
	│ delete  │ -p download-only-850642                                                                                                                                                 │ download-only-850642 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
	│ start   │ -o=json --download-only -p download-only-334487 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-334487 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:10:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:10:53.675310   18266 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:10:53.675569   18266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:53.675579   18266 out.go:374] Setting ErrFile to fd 2...
	I1123 08:10:53.675585   18266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:53.675789   18266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:10:53.676230   18266 out.go:368] Setting JSON to true
	I1123 08:10:53.677004   18266 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3203,"bootTime":1763882251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:10:53.677055   18266 start.go:143] virtualization: kvm guest
	I1123 08:10:53.678914   18266 out.go:99] [download-only-334487] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:10:53.679051   18266 notify.go:221] Checking for updates...
	I1123 08:10:53.680484   18266 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:10:53.681681   18266 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:10:53.682797   18266 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 08:10:53.683897   18266 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 08:10:53.685068   18266 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-334487 host does not exist
	  To start a cluster, run: "minikube start -p download-only-334487"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-334487
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 08:10:57.305210   18055 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-588509 --alsologtostderr --binary-mirror http://127.0.0.1:36055 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-588509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-588509
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestOffline (50.19s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-563345 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-563345 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (49.494929743s)
helpers_test.go:175: Cleaning up "offline-crio-563345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-563345
--- PASS: TestOffline (50.19s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-964416
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-964416: exit status 85 (59.539707ms)

                                                
                                                
-- stdout --
	* Profile "addons-964416" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-964416"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-964416
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-964416: exit status 85 (58.960384ms)

                                                
                                                
-- stdout --
	* Profile "addons-964416" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-964416"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (128.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-964416 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-964416 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.751416273s)
--- PASS: TestAddons/Setup (128.75s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-964416 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-964416 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.57s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-964416 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-964416 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f5d5468e-0e81-49d7-8cef-aec9926db30e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f5d5468e-0e81-49d7-8cef-aec9926db30e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003843327s
addons_test.go:694: (dbg) Run:  kubectl --context addons-964416 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-964416 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-964416 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.57s)

                                                
                                    
TestAddons/parallel/Registry (16.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.974461ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-tgrtb" [462f4f44-75d7-422b-bb9c-ceb8be37562e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008039918s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-sn2cr" [aeb28b9e-fe74-4f9c-99cb-c02c966c626d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004626474s
addons_test.go:392: (dbg) Run:  kubectl --context addons-964416 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-964416 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-964416 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.735561373s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 ip
2025/11/23 08:13:42 [DEBUG] GET http://192.168.39.198:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable registry --alsologtostderr -v=1: (1.04215289s)
--- PASS: TestAddons/parallel/Registry (16.93s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.7s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.545475ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-964416
addons_test.go:332: (dbg) Run:  kubectl --context addons-964416 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9qv6v" [45e1f4c8-31b2-4b08-95a0-ae330ed7cc1e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005902554s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable inspektor-gadget --alsologtostderr -v=1: (5.858070847s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.99s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 10.20856ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bbw4l" [ca8af767-0eca-442a-abca-2fdfda492b61] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005352729s
addons_test.go:463: (dbg) Run:  kubectl --context addons-964416 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.99s)

                                                
                                    
TestAddons/parallel/CSI (64.19s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1123 08:13:26.134263   18055 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 08:13:26.143691   18055 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 08:13:26.143721   18055 kapi.go:107] duration metric: took 9.478631ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.490477ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-964416 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-964416 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [87b74151-a7ad-4723-bdcd-3b666e90b975] Pending
helpers_test.go:352: "task-pv-pod" [87b74151-a7ad-4723-bdcd-3b666e90b975] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [87b74151-a7ad-4723-bdcd-3b666e90b975] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005044601s
addons_test.go:572: (dbg) Run:  kubectl --context addons-964416 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-964416 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-964416 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-964416 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-964416 delete pod task-pv-pod: (1.31279324s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-964416 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-964416 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-964416 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1c1aff57-fb30-4fda-ba94-632a5480af71] Pending
helpers_test.go:352: "task-pv-pod-restore" [1c1aff57-fb30-4fda-ba94-632a5480af71] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1c1aff57-fb30-4fda-ba94-632a5480af71] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005130734s
addons_test.go:614: (dbg) Run:  kubectl --context addons-964416 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-964416 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-964416 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.938920852s)
--- PASS: TestAddons/parallel/CSI (64.19s)

                                                
                                    
TestAddons/parallel/Headlamp (19.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-964416 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-tss77" [18435a7c-1145-45d9-8599-d207b6ed2b0d] Pending
helpers_test.go:352: "headlamp-dfcdc64b-tss77" [18435a7c-1145-45d9-8599-d207b6ed2b0d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-tss77" [18435a7c-1145-45d9-8599-d207b6ed2b0d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004626718s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable headlamp --alsologtostderr -v=1: (6.447086256s)
--- PASS: TestAddons/parallel/Headlamp (19.35s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-tgzh5" [06ff1029-3462-427f-9abb-e62856c104c2] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004748731s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
TestAddons/parallel/LocalPath (53.66s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-964416 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-964416 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [95566208-927b-4e2d-bb1b-f366b8806081] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [95566208-927b-4e2d-bb1b-f366b8806081] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [95566208-927b-4e2d-bb1b-f366b8806081] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003334166s
addons_test.go:967: (dbg) Run:  kubectl --context addons-964416 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 ssh "cat /opt/local-path-provisioner/pvc-cd89c1fc-4685-472d-9496-2945ce215720_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-964416 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-964416 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.860359045s)
--- PASS: TestAddons/parallel/LocalPath (53.66s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-n75x9" [8710964c-97c8-402e-9549-f6b1f4591c57] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.012674641s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
TestAddons/parallel/Yakd (10.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-q5wf2" [078e894c-77ed-4fbe-a222-4c1143b50900] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009037262s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable yakd --alsologtostderr -v=1: (5.91554787s)
--- PASS: TestAddons/parallel/Yakd (10.93s)

                                                
                                    
TestAddons/StoppedEnableDisable (87.88s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-964416
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-964416: (1m27.688243963s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-964416
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-964416
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-964416
--- PASS: TestAddons/StoppedEnableDisable (87.88s)

                                                
                                    
TestCertOptions (58.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-763715 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-763715 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (56.947664159s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-763715 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-763715 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-763715 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-763715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-763715
--- PASS: TestCertOptions (58.03s)

                                                
                                    
TestCertExpiration (503.15s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-103506 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-103506 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m2.002944025s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-103506 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-103506 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (4m20.417200536s)
helpers_test.go:175: Cleaning up "cert-expiration-103506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-103506
--- PASS: TestCertExpiration (503.15s)

                                                
                                    
TestForceSystemdFlag (77.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-623009 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-623009 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.907241933s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-623009 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-623009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-623009
--- PASS: TestForceSystemdFlag (77.87s)

                                                
                                    
TestForceSystemdEnv (58.44s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-610299 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-610299 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.650083892s)
helpers_test.go:175: Cleaning up "force-systemd-env-610299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-610299
--- PASS: TestForceSystemdEnv (58.44s)

                                                
                                    
TestErrorSpam/setup (39.57s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-431222 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-431222 --driver=kvm2  --container-runtime=crio
E1123 08:18:07.474962   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:07.481321   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:07.492629   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:07.513938   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:07.555284   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:07.636665   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:07.798150   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:08.119800   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:08.761759   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:10.043345   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:12.605400   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:17.727451   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:27.969145   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-431222 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-431222 --driver=kvm2  --container-runtime=crio: (39.57050109s)
--- PASS: TestErrorSpam/setup (39.57s)

                                                
                                    
TestErrorSpam/start (0.31s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

                                                
                                    
TestErrorSpam/status (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 status
--- PASS: TestErrorSpam/status (0.65s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
TestErrorSpam/stop (73.63s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 stop
E1123 08:18:48.451046   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:29.413763   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 stop: (1m10.849734788s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 stop: (1.43442502s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-431222 --log_dir /tmp/nospam-431222 stop: (1.349252656s)
--- PASS: TestErrorSpam/stop (73.63s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21969-14048/.minikube/files/etc/test/nested/copy/18055/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (88.62s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427957 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1123 08:20:51.338821   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-427957 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m28.614770837s)
--- PASS: TestFunctional/serial/StartWithProxy (88.62s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.61s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 08:21:16.826251   18055 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427957 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-427957 --alsologtostderr -v=8: (53.610409159s)
functional_test.go:678: soft start took 53.611135516s for "functional-427957" cluster.
I1123 08:22:10.437047   18055 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (53.61s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-427957 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 cache add registry.k8s.io/pause:3.3: (1.032610546s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 cache add registry.k8s.io/pause:latest: (1.027698771s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-427957 /tmp/TestFunctionalserialCacheCmdcacheadd_local2960006468/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cache add minikube-local-cache-test:functional-427957
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cache delete minikube-local-cache-test:functional-427957
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-427957
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (165.87799ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
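
The passing sequence above doubles as a recipe for refreshing a cached image on the node. A minimal sketch of the same workflow, assuming the built binary is invoked as minikube and the profile name functional-427957 is reused:

	# remove the image from the node's runtime, then confirm it is gone (inspecti exits 1)
	minikube -p functional-427957 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-427957 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push everything in minikube's local cache back onto the node; inspecti now exits 0
	minikube -p functional-427957 cache reload
	minikube -p functional-427957 ssh sudo crictl inspecti registry.k8s.io/pause:latest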

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 kubectl -- --context functional-427957 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-427957 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427957 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-427957 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.268449617s)
functional_test.go:776: restart took 33.268544788s for "functional-427957" cluster.
I1123 08:22:50.043411   18055 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.27s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-427957 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 logs: (1.272068147s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 logs --file /tmp/TestFunctionalserialLogsFileCmd1060060838/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 logs --file /tmp/TestFunctionalserialLogsFileCmd1060060838/001/logs.txt: (1.300294624s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (4.51s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-427957 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-427957
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-427957: exit status 115 (224.093788ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.243:30523 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr **
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-427957 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-427957 delete -f testdata/invalidsvc.yaml: (1.083886593s)
--- PASS: TestFunctional/serial/InvalidService (4.51s)
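
Exit status 115 (SVC_UNREACHABLE) is what minikube service returns when the Service object exists but no running pod backs it. A rough sketch of reproducing the check by hand, using the same testdata manifest and assuming the binary is on PATH as minikube:

	kubectl --context functional-427957 apply -f testdata/invalidsvc.yaml
	minikube -p functional-427957 service invalid-svc; echo "exit=$?"   # expect exit=115
	kubectl --context functional-427957 delete -f testdata/invalidsvc.yaml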

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 config get cpus: exit status 14 (55.570793ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 config get cpus: exit status 14 (67.316753ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
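
The exit-status contract exercised here: config get returns a non-zero status (14 in this log) when the key is unset, and 0 once it is set. A rough sketch under the same binary and profile assumptions:

	minikube -p functional-427957 config unset cpus
	minikube -p functional-427957 config get cpus || echo "unset"   # non-zero exit
	minikube -p functional-427957 config set cpus 2
	minikube -p functional-427957 config get cpus                   # prints 2, exit 0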

TestFunctional/parallel/DashboardCmd (14.41s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-427957 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-427957 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 24825: os: process already finished
E1123 08:23:35.180772   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (14.41s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427957 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-427957 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (100.393004ms)

-- stdout --
	* [functional-427957] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr **
	I1123 08:23:20.087607   24723 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:23:20.087864   24723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:20.087874   24723 out.go:374] Setting ErrFile to fd 2...
	I1123 08:23:20.087881   24723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:20.088071   24723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:23:20.088517   24723 out.go:368] Setting JSON to false
	I1123 08:23:20.089301   24723 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3949,"bootTime":1763882251,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:23:20.089356   24723 start.go:143] virtualization: kvm guest
	I1123 08:23:20.091120   24723 out.go:179] * [functional-427957] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:23:20.092318   24723 notify.go:221] Checking for updates...
	I1123 08:23:20.092341   24723 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:23:20.093390   24723 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:23:20.094605   24723 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 08:23:20.095673   24723 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 08:23:20.096745   24723 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:23:20.097793   24723 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:23:20.099394   24723 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:23:20.100045   24723 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:23:20.130595   24723 out.go:179] * Using the kvm2 driver based on existing profile
	I1123 08:23:20.131555   24723 start.go:309] selected driver: kvm2
	I1123 08:23:20.131564   24723 start.go:927] validating driver "kvm2" against &{Name:functional-427957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-427957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:23:20.131642   24723 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:23:20.133253   24723 out.go:203] 
	W1123 08:23:20.134347   24723 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:23:20.135238   24723 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427957 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.21s)
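
The dry-run pass shows that memory validation happens before any VM work is attempted: a request below the usable minimum (1800MB per the log) fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the same dry run without --memory succeeds. A sketch of both invocations, same binary and profile assumptions as above:

	minikube start -p functional-427957 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio          # exit 23
	minikube start -p functional-427957 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio  # exit 0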

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427957 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-427957 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (112.238581ms)

-- stdout --
	* [functional-427957] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr **
	I1123 08:23:20.306886   24765 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:23:20.307176   24765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:20.307186   24765 out.go:374] Setting ErrFile to fd 2...
	I1123 08:23:20.307193   24765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:20.307516   24765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:23:20.307950   24765 out.go:368] Setting JSON to false
	I1123 08:23:20.308762   24765 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3949,"bootTime":1763882251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:23:20.308814   24765 start.go:143] virtualization: kvm guest
	I1123 08:23:20.310374   24765 out.go:179] * [functional-427957] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 08:23:20.311531   24765 notify.go:221] Checking for updates...
	I1123 08:23:20.311546   24765 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:23:20.312680   24765 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:23:20.313924   24765 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 08:23:20.315306   24765 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 08:23:20.319971   24765 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:23:20.321150   24765 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:23:20.323025   24765 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:23:20.323791   24765 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:23:20.354802   24765 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1123 08:23:20.355865   24765 start.go:309] selected driver: kvm2
	I1123 08:23:20.355881   24765 start.go:927] validating driver "kvm2" against &{Name:functional-427957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-427957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:23:20.356005   24765 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:23:20.358196   24765 out.go:203] 
	W1123 08:23:20.359873   24765 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:23:20.361002   24765 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.71s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.71s)

TestFunctional/parallel/ServiceCmdConnect (8.48s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-427957 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-427957 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-lxm4n" [f4199e2c-ba7a-4288-9f6b-c8dc18b2d4fe] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-lxm4n" [f4199e2c-ba7a-4288-9f6b-c8dc18b2d4fe] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003967223s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.243:31009
functional_test.go:1680: http://192.168.39.243:31009: success! body:
Request served by hello-node-connect-7d85dfc575-lxm4n

HTTP/1.1 GET /

Host: 192.168.39.243:31009
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.48s)
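
This is the standard NodePort round-trip: create a deployment, expose it, resolve the URL through minikube, and fetch it. A condensed sketch, assuming kubectl points at the functional-427957 context and the kicbase/echo-server image is pullable:

	kubectl create deployment hello-node-connect --image kicbase/echo-server
	kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(minikube -p functional-427957 service hello-node-connect --url)
	curl -s "$URL"   # echo-server replies with the request it received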

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (38.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c038f37b-b0ee-4b35-b95f-8aaf9aad9e30] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006348217s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-427957 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-427957 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-427957 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-427957 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:23:05.887718   18055 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fa03a456-6686-4669-8d38-f7560fa9f3ef] Pending
helpers_test.go:352: "sp-pod" [fa03a456-6686-4669-8d38-f7560fa9f3ef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1123 08:23:07.470858   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [fa03a456-6686-4669-8d38-f7560fa9f3ef] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.040225003s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-427957 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-427957 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-427957 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:23:23.161325   18055 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [aa9553a9-e8a0-4ecd-b5c9-350e995d8581] Pending
helpers_test.go:352: "sp-pod" [aa9553a9-e8a0-4ecd-b5c9-350e995d8581] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [aa9553a9-e8a0-4ecd-b5c9-350e995d8581] Running
2025/11/23 08:23:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004651339s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-427957 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.99s)
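
The pass hinges on data outliving the pod: a file is written through the claim, the pod is deleted, and a fresh pod mounting the same PVC still sees it. A condensed sketch using the same testdata manifests:

	kubectl apply -f testdata/storage-provisioner/pvc.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml
	kubectl exec sp-pod -- touch /tmp/mount/foo
	kubectl delete -f testdata/storage-provisioner/pod.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl exec sp-pod -- ls /tmp/mount                     # foo should still be listed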

TestFunctional/parallel/SSHCmd (0.36s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

TestFunctional/parallel/CpCmd (1.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh -n functional-427957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cp functional-427957:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd87989117/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh -n functional-427957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh -n functional-427957 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.15s)
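
The cp test is a round-trip plus a copy into a VM directory that does not yet exist. A sketch of the three transfers, assuming a local testdata/cp-test.txt and a hypothetical /tmp/out.txt on the host:

	minikube -p functional-427957 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> VM
	minikube -p functional-427957 cp functional-427957:/home/docker/cp-test.txt /tmp/out.txt  # VM -> host
	minikube -p functional-427957 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # parent dirs created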

TestFunctional/parallel/MySQL (21.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-427957 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-crtjf" [7efbde74-2ffb-4186-9edb-e022079cbf70] Pending
helpers_test.go:352: "mysql-5bb876957f-crtjf" [7efbde74-2ffb-4186-9edb-e022079cbf70] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-crtjf" [7efbde74-2ffb-4186-9edb-e022079cbf70] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.009397248s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427957 exec mysql-5bb876957f-crtjf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427957 exec mysql-5bb876957f-crtjf -- mysql -ppassword -e "show databases;": exit status 1 (221.489964ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1123 08:23:15.276523   18055 retry.go:31] will retry after 1.355877034s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427957 exec mysql-5bb876957f-crtjf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427957 exec mysql-5bb876957f-crtjf -- mysql -ppassword -e "show databases;": exit status 1 (139.573468ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1123 08:23:16.772351   18055 retry.go:31] will retry after 2.083065995s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427957 exec mysql-5bb876957f-crtjf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.20s)

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/18055/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo cat /etc/test/nested/copy/18055/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/18055.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo cat /etc/ssl/certs/18055.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/18055.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo cat /usr/share/ca-certificates/18055.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/180552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo cat /etc/ssl/certs/180552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/180552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo cat /usr/share/ca-certificates/180552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
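
The hashed filenames checked here (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention for CA directories, so each synced cert is reachable both by its named .pem and by its hash link. One could verify that correspondence inside the VM roughly as follows, assuming openssl is available in the guest:

	minikube -p functional-427957 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/18055.pem"   # expect 51391683
	minikube -p functional-427957 ssh "sudo cat /etc/ssl/certs/51391683.0"                               # same certificate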

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-427957 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh "sudo systemctl is-active docker": exit status 1 (188.385171ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh "sudo systemctl is-active containerd": exit status 1 (186.099642ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
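
Because the cluster runs crio, the check is simply that the competing runtimes' systemd units are inactive; systemctl is-active exits 3 for an inactive unit, which ssh surfaces as the non-zero status seen above. A sketch, same assumptions as the earlier snippets:

	minikube -p functional-427957 ssh "sudo systemctl is-active docker"       # prints inactive, exit 3
	minikube -p functional-427957 ssh "sudo systemctl is-active containerd"   # prints inactive, exit 3
	minikube -p functional-427957 ssh "sudo systemctl is-active crio"         # prints active, exit 0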

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427957 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-427957
localhost/kicbase/echo-server:functional-427957
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427957 image ls --format short --alsologtostderr:
I1123 08:23:25.465964   24950 out.go:360] Setting OutFile to fd 1 ...
I1123 08:23:25.466293   24950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:25.466307   24950 out.go:374] Setting ErrFile to fd 2...
I1123 08:23:25.466314   24950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:25.466597   24950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
I1123 08:23:25.467358   24950 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:25.467534   24950 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:25.469999   24950 ssh_runner.go:195] Run: systemctl --version
I1123 08:23:25.472439   24950 main.go:143] libmachine: domain functional-427957 has defined MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:25.472941   24950 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:cf:47", ip: ""} in network mk-functional-427957: {Iface:virbr1 ExpiryTime:2025-11-23 09:20:02 +0000 UTC Type:0 Mac:52:54:00:73:cf:47 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-427957 Clientid:01:52:54:00:73:cf:47}
I1123 08:23:25.472989   24950 main.go:143] libmachine: domain functional-427957 has defined IP address 192.168.39.243 and MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:25.473132   24950 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/functional-427957/id_rsa Username:docker}
I1123 08:23:25.570044   24950 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427957 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-427957  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/minikube-local-cache-test     │ functional-427957  │ f2177d349541f │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427957 image ls --format table --alsologtostderr:
I1123 08:23:30.742252   25195 out.go:360] Setting OutFile to fd 1 ...
I1123 08:23:30.742490   25195 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:30.742501   25195 out.go:374] Setting ErrFile to fd 2...
I1123 08:23:30.742506   25195 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:30.742673   25195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
I1123 08:23:30.743181   25195 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:30.743267   25195 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:30.745083   25195 ssh_runner.go:195] Run: systemctl --version
I1123 08:23:30.747289   25195 main.go:143] libmachine: domain functional-427957 has defined MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:30.747666   25195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:cf:47", ip: ""} in network mk-functional-427957: {Iface:virbr1 ExpiryTime:2025-11-23 09:20:02 +0000 UTC Type:0 Mac:52:54:00:73:cf:47 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-427957 Clientid:01:52:54:00:73:cf:47}
I1123 08:23:30.747690   25195 main.go:143] libmachine: domain functional-427957 has defined IP address 192.168.39.243 and MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:30.747809   25195 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/functional-427957/id_rsa Username:docker}
I1123 08:23:30.841844   25195 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427957 image ls --format json --alsologtostderr:
[{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f15
5baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/s
torage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":[],"size":"1462480"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-ser
ver@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-427957"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac
7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"f2177d349541f97e444ddd89ee8d433e267a1cf722ecb40011a41bb04d133b52","repoDigests":["localhost/minikube-local-cache-test@sha256:6ccc3766b2d2e16dcad9ded6ece66f321efc9ea875cfb68e470b8be10233484b"],"repoTags":["localhost/minikube-local-cache-test:functional-427957"],"size":"3330"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["dock
er.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"fc25172553d79197ec
d840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427957 image ls --format json --alsologtostderr:
I1123 08:23:30.416796   25185 out.go:360] Setting OutFile to fd 1 ...
I1123 08:23:30.417012   25185 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:30.417021   25185 out.go:374] Setting ErrFile to fd 2...
I1123 08:23:30.417024   25185 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:30.417216   25185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
I1123 08:23:30.417728   25185 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:30.417816   25185 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:30.419722   25185 ssh_runner.go:195] Run: systemctl --version
I1123 08:23:30.421726   25185 main.go:143] libmachine: domain functional-427957 has defined MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:30.422116   25185 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:cf:47", ip: ""} in network mk-functional-427957: {Iface:virbr1 ExpiryTime:2025-11-23 09:20:02 +0000 UTC Type:0 Mac:52:54:00:73:cf:47 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-427957 Clientid:01:52:54:00:73:cf:47}
I1123 08:23:30.422141   25185 main.go:143] libmachine: domain functional-427957 has defined IP address 192.168.39.243 and MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:30.422280   25185 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/functional-427957/id_rsa Username:docker}
I1123 08:23:30.509254   25185 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
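
Note: the JSON above is a single array of image objects, so it is easy to post-process on the host. As a minimal sketch (assuming the jq binary is available on the host; it is not part of the test harness):

  # Print each repo tag together with its reported size, from the same JSON as above.
  out/minikube-linux-amd64 -p functional-427957 image ls --format json \
    | jq -r '.[] | .repoTags[]? as $t | "\($t)\t\(.size)"'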

TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427957 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: f2177d349541f97e444ddd89ee8d433e267a1cf722ecb40011a41bb04d133b52
repoDigests:
- localhost/minikube-local-cache-test@sha256:6ccc3766b2d2e16dcad9ded6ece66f321efc9ea875cfb68e470b8be10233484b
repoTags:
- localhost/minikube-local-cache-test:functional-427957
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-427957
size: "4943877"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427957 image ls --format yaml --alsologtostderr:
I1123 08:23:25.685056   24960 out.go:360] Setting OutFile to fd 1 ...
I1123 08:23:25.685286   24960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:25.685293   24960 out.go:374] Setting ErrFile to fd 2...
I1123 08:23:25.685297   24960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:25.685475   24960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
I1123 08:23:25.686019   24960 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:25.686124   24960 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:25.688299   24960 ssh_runner.go:195] Run: systemctl --version
I1123 08:23:25.690341   24960 main.go:143] libmachine: domain functional-427957 has defined MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:25.690690   24960 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:cf:47", ip: ""} in network mk-functional-427957: {Iface:virbr1 ExpiryTime:2025-11-23 09:20:02 +0000 UTC Type:0 Mac:52:54:00:73:cf:47 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-427957 Clientid:01:52:54:00:73:cf:47}
I1123 08:23:25.690712   24960 main.go:143] libmachine: domain functional-427957 has defined IP address 192.168.39.243 and MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:25.690831   24960 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/functional-427957/id_rsa Username:docker}
I1123 08:23:25.770218   24960 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh pgrep buildkitd: exit status 1 (178.972671ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image build -t localhost/my-image:functional-427957 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 image build -t localhost/my-image:functional-427957 testdata/build --alsologtostderr: (5.792375047s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427957 image build -t localhost/my-image:functional-427957 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8cbc3a04ca5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-427957
--> 51e22233113
Successfully tagged localhost/my-image:functional-427957
51e22233113e8e814d0ff41fdeefae1709a2e13666e1a30980c7245f57ca1574
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427957 image build -t localhost/my-image:functional-427957 testdata/build --alsologtostderr:
I1123 08:23:26.052087   24981 out.go:360] Setting OutFile to fd 1 ...
I1123 08:23:26.052380   24981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:26.052390   24981 out.go:374] Setting ErrFile to fd 2...
I1123 08:23:26.052394   24981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:23:26.052569   24981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
I1123 08:23:26.053098   24981 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:26.053707   24981 config.go:182] Loaded profile config "functional-427957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:23:26.055898   24981 ssh_runner.go:195] Run: systemctl --version
I1123 08:23:26.058319   24981 main.go:143] libmachine: domain functional-427957 has defined MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:26.058778   24981 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:cf:47", ip: ""} in network mk-functional-427957: {Iface:virbr1 ExpiryTime:2025-11-23 09:20:02 +0000 UTC Type:0 Mac:52:54:00:73:cf:47 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-427957 Clientid:01:52:54:00:73:cf:47}
I1123 08:23:26.058804   24981 main.go:143] libmachine: domain functional-427957 has defined IP address 192.168.39.243 and MAC address 52:54:00:73:cf:47 in network mk-functional-427957
I1123 08:23:26.058957   24981 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/functional-427957/id_rsa Username:docker}
I1123 08:23:26.180936   24981 build_images.go:162] Building image from path: /tmp/build.1772747823.tar
I1123 08:23:26.180994   24981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:23:26.204938   24981 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1772747823.tar
I1123 08:23:26.213009   24981 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1772747823.tar: stat -c "%s %y" /var/lib/minikube/build/build.1772747823.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1772747823.tar': No such file or directory
I1123 08:23:26.213039   24981 ssh_runner.go:362] scp /tmp/build.1772747823.tar --> /var/lib/minikube/build/build.1772747823.tar (3072 bytes)
I1123 08:23:26.282964   24981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1772747823
I1123 08:23:26.323207   24981 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1772747823 -xf /var/lib/minikube/build/build.1772747823.tar
I1123 08:23:26.341514   24981 crio.go:315] Building image: /var/lib/minikube/build/build.1772747823
I1123 08:23:26.341599   24981 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-427957 /var/lib/minikube/build/build.1772747823 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1123 08:23:31.755954   24981 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-427957 /var/lib/minikube/build/build.1772747823 --cgroup-manager=cgroupfs: (5.414307231s)
I1123 08:23:31.756027   24981 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1772747823
I1123 08:23:31.771989   24981 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1772747823.tar
I1123 08:23:31.785572   24981 build_images.go:218] Built localhost/my-image:functional-427957 from /tmp/build.1772747823.tar
I1123 08:23:31.785600   24981 build_images.go:134] succeeded building to: functional-427957
I1123 08:23:31.785604   24981 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.15s)
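
Note: the STEP lines in the stdout above pin down the shape of the testdata/build context: a busybox base, a no-op RUN, and one ADD. An equivalent context can be recreated by hand; the /tmp/build path and the file contents below are illustrative assumptions, not the repository's actual testdata:

  # Recreate a minimal equivalent of testdata/build and build it the same way the test does.
  mkdir -p /tmp/build && cd /tmp/build
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo hello > content.txt
  out/minikube-linux-amd64 -p functional-427957 image build -t localhost/my-image:functional-427957 .

As the stderr trace shows, image build tars the context, copies it to /var/lib/minikube/build inside the guest, and delegates to sudo podman build there.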

TestFunctional/parallel/ImageCommands/Setup (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-427957
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image load --daemon kicbase/echo-server:functional-427957 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 image load --daemon kicbase/echo-server:functional-427957 --alsologtostderr: (1.697222187s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.27s)

TestFunctional/parallel/ServiceCmd/DeployApp (19.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-427957 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-427957 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-v67mw" [2f9f87fb-79c3-4f32-85c5-882ef15d69be] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-v67mw" [2f9f87fb-79c3-4f32-85c5-882ef15d69be] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.003780593s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image load --daemon kicbase/echo-server:functional-427957 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-427957
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image load --daemon kicbase/echo-server:functional-427957 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 image load --daemon kicbase/echo-server:functional-427957 --alsologtostderr: (7.931339781s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.36s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image save kicbase/echo-server:functional-427957 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image rm kicbase/echo-server:functional-427957 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.01309255s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)
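
Note: ImageSaveToFile and ImageLoadFromFile together exercise a full tar round trip through the cluster's image store. The same round trip can be driven by hand with the subcommands used above (the /tmp path is illustrative):

  # Save an image to a tarball, drop it from the runtime, then load it back.
  out/minikube-linux-amd64 -p functional-427957 image save kicbase/echo-server:functional-427957 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-427957 image rm kicbase/echo-server:functional-427957
  out/minikube-linux-amd64 -p functional-427957 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-427957 image ls    # the tag should appear again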

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-427957
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 image save --daemon kicbase/echo-server:functional-427957 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-427957 image save --daemon kicbase/echo-server:functional-427957 --alsologtostderr: (3.399479939s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-427957
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.44s)

TestFunctional/parallel/ServiceCmd/List (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 service list -o json
functional_test.go:1504: Took "386.267112ms" to run "out/minikube-linux-amd64 -p functional-427957 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.243:30823
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

TestFunctional/parallel/ServiceCmd/Format (0.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.22s)

TestFunctional/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.243:30823
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)
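
Note: HTTPS, Format, and URL above all resolve the same NodePort endpoint (192.168.39.243:30823) in different output formats. Once resolved, the endpoint can be probed directly; the curl call is an illustrative extra step, not part of the test:

  # Resolve the service URL, then hit the echo server behind the NodePort.
  URL=$(out/minikube-linux-amd64 -p functional-427957 service hello-node --url)
  curl -s "$URL"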

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "238.985923ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.47455ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "281.727131ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.183784ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/MountCmd/any-port (7.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdany-port1722865197/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763886199724699928" to /tmp/TestFunctionalparallelMountCmdany-port1722865197/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763886199724699928" to /tmp/TestFunctionalparallelMountCmdany-port1722865197/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763886199724699928" to /tmp/TestFunctionalparallelMountCmdany-port1722865197/001/test-1763886199724699928
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (162.922629ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 08:23:19.887974   18055 retry.go:31] will retry after 282.850601ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:23 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:23 test-1763886199724699928
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh cat /mount-9p/test-1763886199724699928
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-427957 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [eeb695a8-07d4-4758-bc53-7af2d2c4eb8b] Pending
helpers_test.go:352: "busybox-mount" [eeb695a8-07d4-4758-bc53-7af2d2c4eb8b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [eeb695a8-07d4-4758-bc53-7af2d2c4eb8b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [eeb695a8-07d4-4758-bc53-7af2d2c4eb8b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.009383369s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-427957 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdany-port1722865197/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.86s)
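
Note: the any-port flow above is driven by minikube mount, which serves a host directory into the guest over 9p and is then verified from inside over SSH. A hand-run equivalent (the host directory is illustrative):

  # Serve a host directory into the guest over 9p, then check it from the guest side.
  out/minikube-linux-amd64 mount -p functional-427957 /tmp/hostdir:/mount-9p &
  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-427957 ssh -- ls -la /mount-9p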

TestFunctional/parallel/MountCmd/specific-port (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdspecific-port3661287046/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.661818ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 08:23:27.802887   18055 retry.go:31] will retry after 546.899791ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdspecific-port3661287046/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh "sudo umount -f /mount-9p": exit status 1 (174.101846ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-427957 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdspecific-port3661287046/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2904133286/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2904133286/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2904133286/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T" /mount1: exit status 1 (236.014208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 08:23:29.336284   18055 retry.go:31] will retry after 270.035228ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427957 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-427957 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2904133286/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2904133286/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2904133286/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-427957
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-427957
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-427957
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (192.4s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m11.862733285s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (192.40s)

TestMultiControlPlane/serial/DeployApp (6.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 kubectl -- rollout status deployment/busybox: (4.022263467s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-fgg4q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-nlhgv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-znxvg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-fgg4q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-nlhgv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-znxvg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-fgg4q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-nlhgv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-znxvg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.34s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-fgg4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-fgg4q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-nlhgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-nlhgv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-znxvg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 kubectl -- exec busybox-7b57f96db7-znxvg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)

TestMultiControlPlane/serial/AddWorkerNode (43.62s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 node add --alsologtostderr -v 5: (42.947242374s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.62s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-099603 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

TestMultiControlPlane/serial/CopyFile (10.5s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp testdata/cp-test.txt ha-099603:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1650273936/001/cp-test_ha-099603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603:/home/docker/cp-test.txt ha-099603-m02:/home/docker/cp-test_ha-099603_ha-099603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test_ha-099603_ha-099603-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603:/home/docker/cp-test.txt ha-099603-m03:/home/docker/cp-test_ha-099603_ha-099603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test_ha-099603_ha-099603-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603:/home/docker/cp-test.txt ha-099603-m04:/home/docker/cp-test_ha-099603_ha-099603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test_ha-099603_ha-099603-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp testdata/cp-test.txt ha-099603-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1650273936/001/cp-test_ha-099603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m02:/home/docker/cp-test.txt ha-099603:/home/docker/cp-test_ha-099603-m02_ha-099603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test_ha-099603-m02_ha-099603.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m02:/home/docker/cp-test.txt ha-099603-m03:/home/docker/cp-test_ha-099603-m02_ha-099603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test_ha-099603-m02_ha-099603-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m02:/home/docker/cp-test.txt ha-099603-m04:/home/docker/cp-test_ha-099603-m02_ha-099603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test_ha-099603-m02_ha-099603-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp testdata/cp-test.txt ha-099603-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1650273936/001/cp-test_ha-099603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m03:/home/docker/cp-test.txt ha-099603:/home/docker/cp-test_ha-099603-m03_ha-099603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test_ha-099603-m03_ha-099603.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m03:/home/docker/cp-test.txt ha-099603-m02:/home/docker/cp-test_ha-099603-m03_ha-099603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test_ha-099603-m03_ha-099603-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m03:/home/docker/cp-test.txt ha-099603-m04:/home/docker/cp-test_ha-099603-m03_ha-099603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test_ha-099603-m03_ha-099603-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp testdata/cp-test.txt ha-099603-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1650273936/001/cp-test_ha-099603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m04:/home/docker/cp-test.txt ha-099603:/home/docker/cp-test_ha-099603-m04_ha-099603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603 "sudo cat /home/docker/cp-test_ha-099603-m04_ha-099603.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m04:/home/docker/cp-test.txt ha-099603-m02:/home/docker/cp-test_ha-099603-m04_ha-099603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m02 "sudo cat /home/docker/cp-test_ha-099603-m04_ha-099603-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 cp ha-099603-m04:/home/docker/cp-test.txt ha-099603-m03:/home/docker/cp-test_ha-099603-m04_ha-099603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 ssh -n ha-099603-m03 "sudo cat /home/docker/cp-test_ha-099603-m04_ha-099603-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.50s)

TestMultiControlPlane/serial/StopSecondaryNode (89.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 node stop m02 --alsologtostderr -v 5
E1123 08:27:58.045815   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:58.052210   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:58.063596   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:58.084941   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:58.126340   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:58.208653   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:58.370178   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:58.691866   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:59.334024   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:00.616212   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:03.178324   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:07.471724   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:08.300172   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:18.541912   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:39.023441   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:29:19.986063   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 node stop m02 --alsologtostderr -v 5: (1m28.502051123s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5: exit status 7 (506.036696ms)
-- stdout --
	ha-099603
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-099603-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-099603-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-099603-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1123 08:29:22.658633   28065 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:29:22.659074   28065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:29:22.659085   28065 out.go:374] Setting ErrFile to fd 2...
	I1123 08:29:22.659092   28065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:29:22.659281   28065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:29:22.659441   28065 out.go:368] Setting JSON to false
	I1123 08:29:22.659482   28065 mustload.go:66] Loading cluster: ha-099603
	I1123 08:29:22.659578   28065 notify.go:221] Checking for updates...
	I1123 08:29:22.659868   28065 config.go:182] Loaded profile config "ha-099603": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:29:22.659889   28065 status.go:174] checking status of ha-099603 ...
	I1123 08:29:22.662009   28065 status.go:371] ha-099603 host status = "Running" (err=<nil>)
	I1123 08:29:22.662030   28065 host.go:66] Checking if "ha-099603" exists ...
	I1123 08:29:22.665029   28065 main.go:143] libmachine: domain ha-099603 has defined MAC address 52:54:00:9e:8c:c1 in network mk-ha-099603
	I1123 08:29:22.665445   28065 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:8c:c1", ip: ""} in network mk-ha-099603: {Iface:virbr1 ExpiryTime:2025-11-23 09:23:53 +0000 UTC Type:0 Mac:52:54:00:9e:8c:c1 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-099603 Clientid:01:52:54:00:9e:8c:c1}
	I1123 08:29:22.665490   28065 main.go:143] libmachine: domain ha-099603 has defined IP address 192.168.39.175 and MAC address 52:54:00:9e:8c:c1 in network mk-ha-099603
	I1123 08:29:22.665617   28065 host.go:66] Checking if "ha-099603" exists ...
	I1123 08:29:22.665893   28065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:29:22.668624   28065 main.go:143] libmachine: domain ha-099603 has defined MAC address 52:54:00:9e:8c:c1 in network mk-ha-099603
	I1123 08:29:22.669064   28065 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:8c:c1", ip: ""} in network mk-ha-099603: {Iface:virbr1 ExpiryTime:2025-11-23 09:23:53 +0000 UTC Type:0 Mac:52:54:00:9e:8c:c1 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-099603 Clientid:01:52:54:00:9e:8c:c1}
	I1123 08:29:22.669099   28065 main.go:143] libmachine: domain ha-099603 has defined IP address 192.168.39.175 and MAC address 52:54:00:9e:8c:c1 in network mk-ha-099603
	I1123 08:29:22.669259   28065 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/ha-099603/id_rsa Username:docker}
	I1123 08:29:22.753794   28065 ssh_runner.go:195] Run: systemctl --version
	I1123 08:29:22.761040   28065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:29:22.781486   28065 kubeconfig.go:125] found "ha-099603" server: "https://192.168.39.254:8443"
	I1123 08:29:22.781518   28065 api_server.go:166] Checking apiserver status ...
	I1123 08:29:22.781549   28065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:29:22.804649   28065 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	W1123 08:29:22.820239   28065 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:29:22.820300   28065 ssh_runner.go:195] Run: ls
	I1123 08:29:22.826746   28065 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1123 08:29:22.831504   28065 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1123 08:29:22.831525   28065 status.go:463] ha-099603 apiserver status = Running (err=<nil>)
	I1123 08:29:22.831536   28065 status.go:176] ha-099603 status: &{Name:ha-099603 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:29:22.831554   28065 status.go:174] checking status of ha-099603-m02 ...
	I1123 08:29:22.833344   28065 status.go:371] ha-099603-m02 host status = "Stopped" (err=<nil>)
	I1123 08:29:22.833361   28065 status.go:384] host is not running, skipping remaining checks
	I1123 08:29:22.833368   28065 status.go:176] ha-099603-m02 status: &{Name:ha-099603-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:29:22.833385   28065 status.go:174] checking status of ha-099603-m03 ...
	I1123 08:29:22.834538   28065 status.go:371] ha-099603-m03 host status = "Running" (err=<nil>)
	I1123 08:29:22.834563   28065 host.go:66] Checking if "ha-099603-m03" exists ...
	I1123 08:29:22.836932   28065 main.go:143] libmachine: domain ha-099603-m03 has defined MAC address 52:54:00:fa:a2:3b in network mk-ha-099603
	I1123 08:29:22.837344   28065 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:a2:3b", ip: ""} in network mk-ha-099603: {Iface:virbr1 ExpiryTime:2025-11-23 09:25:51 +0000 UTC Type:0 Mac:52:54:00:fa:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-099603-m03 Clientid:01:52:54:00:fa:a2:3b}
	I1123 08:29:22.837369   28065 main.go:143] libmachine: domain ha-099603-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:fa:a2:3b in network mk-ha-099603
	I1123 08:29:22.837511   28065 host.go:66] Checking if "ha-099603-m03" exists ...
	I1123 08:29:22.837700   28065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:29:22.839742   28065 main.go:143] libmachine: domain ha-099603-m03 has defined MAC address 52:54:00:fa:a2:3b in network mk-ha-099603
	I1123 08:29:22.840149   28065 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:a2:3b", ip: ""} in network mk-ha-099603: {Iface:virbr1 ExpiryTime:2025-11-23 09:25:51 +0000 UTC Type:0 Mac:52:54:00:fa:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-099603-m03 Clientid:01:52:54:00:fa:a2:3b}
	I1123 08:29:22.840177   28065 main.go:143] libmachine: domain ha-099603-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:fa:a2:3b in network mk-ha-099603
	I1123 08:29:22.840331   28065 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/ha-099603-m03/id_rsa Username:docker}
	I1123 08:29:22.925800   28065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:29:22.946091   28065 kubeconfig.go:125] found "ha-099603" server: "https://192.168.39.254:8443"
	I1123 08:29:22.946115   28065 api_server.go:166] Checking apiserver status ...
	I1123 08:29:22.946145   28065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:29:22.968015   28065 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1716/cgroup
	W1123 08:29:22.980134   28065 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1716/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:29:22.980185   28065 ssh_runner.go:195] Run: ls
	I1123 08:29:22.985713   28065 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1123 08:29:22.990636   28065 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1123 08:29:22.990657   28065 status.go:463] ha-099603-m03 apiserver status = Running (err=<nil>)
	I1123 08:29:22.990667   28065 status.go:176] ha-099603-m03 status: &{Name:ha-099603-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:29:22.990690   28065 status.go:174] checking status of ha-099603-m04 ...
	I1123 08:29:22.992288   28065 status.go:371] ha-099603-m04 host status = "Running" (err=<nil>)
	I1123 08:29:22.992306   28065 host.go:66] Checking if "ha-099603-m04" exists ...
	I1123 08:29:22.994928   28065 main.go:143] libmachine: domain ha-099603-m04 has defined MAC address 52:54:00:75:0a:c6 in network mk-ha-099603
	I1123 08:29:22.995344   28065 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:0a:c6", ip: ""} in network mk-ha-099603: {Iface:virbr1 ExpiryTime:2025-11-23 09:27:14 +0000 UTC Type:0 Mac:52:54:00:75:0a:c6 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-099603-m04 Clientid:01:52:54:00:75:0a:c6}
	I1123 08:29:22.995369   28065 main.go:143] libmachine: domain ha-099603-m04 has defined IP address 192.168.39.53 and MAC address 52:54:00:75:0a:c6 in network mk-ha-099603
	I1123 08:29:22.995521   28065 host.go:66] Checking if "ha-099603-m04" exists ...
	I1123 08:29:22.995756   28065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:29:22.998003   28065 main.go:143] libmachine: domain ha-099603-m04 has defined MAC address 52:54:00:75:0a:c6 in network mk-ha-099603
	I1123 08:29:22.998391   28065 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:0a:c6", ip: ""} in network mk-ha-099603: {Iface:virbr1 ExpiryTime:2025-11-23 09:27:14 +0000 UTC Type:0 Mac:52:54:00:75:0a:c6 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-099603-m04 Clientid:01:52:54:00:75:0a:c6}
	I1123 08:29:22.998415   28065 main.go:143] libmachine: domain ha-099603-m04 has defined IP address 192.168.39.53 and MAC address 52:54:00:75:0a:c6 in network mk-ha-099603
	I1123 08:29:22.998599   28065 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/ha-099603-m04/id_rsa Username:docker}
	I1123 08:29:23.086790   28065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:29:23.106886   28065 status.go:176] ha-099603-m04 status: &{Name:ha-099603-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (89.01s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.2s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 node start m02 --alsologtostderr -v 5: (34.285733887s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.75s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 stop --alsologtostderr -v 5
E1123 08:30:41.910649   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:32:58.045630   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:07.471087   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:25.752356   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 stop --alsologtostderr -v 5: (4m12.170127349s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 start --wait true --alsologtostderr -v 5
E1123 08:34:30.542601   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 start --wait true --alsologtostderr -v 5: (2m2.435594805s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.75s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.85s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 node delete m03 --alsologtostderr -v 5: (17.233198624s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.85s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

TestMultiControlPlane/serial/StopCluster (244.15s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 stop --alsologtostderr -v 5
E1123 08:37:58.045933   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:38:07.470494   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 stop --alsologtostderr -v 5: (4m4.092888461s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5: exit status 7 (60.73243ms)
-- stdout --
	ha-099603
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-099603-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-099603-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1123 08:40:36.879183   31253 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:40:36.879286   31253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:40:36.879297   31253 out.go:374] Setting ErrFile to fd 2...
	I1123 08:40:36.879304   31253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:40:36.879530   31253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:40:36.879686   31253 out.go:368] Setting JSON to false
	I1123 08:40:36.879713   31253 mustload.go:66] Loading cluster: ha-099603
	I1123 08:40:36.879871   31253 notify.go:221] Checking for updates...
	I1123 08:40:36.880188   31253 config.go:182] Loaded profile config "ha-099603": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:40:36.880213   31253 status.go:174] checking status of ha-099603 ...
	I1123 08:40:36.882385   31253 status.go:371] ha-099603 host status = "Stopped" (err=<nil>)
	I1123 08:40:36.882399   31253 status.go:384] host is not running, skipping remaining checks
	I1123 08:40:36.882403   31253 status.go:176] ha-099603 status: &{Name:ha-099603 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:40:36.882424   31253 status.go:174] checking status of ha-099603-m02 ...
	I1123 08:40:36.883732   31253 status.go:371] ha-099603-m02 host status = "Stopped" (err=<nil>)
	I1123 08:40:36.883748   31253 status.go:384] host is not running, skipping remaining checks
	I1123 08:40:36.883753   31253 status.go:176] ha-099603-m02 status: &{Name:ha-099603-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:40:36.883768   31253 status.go:174] checking status of ha-099603-m04 ...
	I1123 08:40:36.885103   31253 status.go:371] ha-099603-m04 host status = "Stopped" (err=<nil>)
	I1123 08:40:36.885118   31253 status.go:384] host is not running, skipping remaining checks
	I1123 08:40:36.885123   31253 status.go:176] ha-099603-m04 status: &{Name:ha-099603-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (244.15s)

TestMultiControlPlane/serial/RestartCluster (101.14s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m40.491740435s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.14s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

TestMultiControlPlane/serial/AddSecondaryNode (72.7s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 node add --control-plane --alsologtostderr -v 5
E1123 08:42:58.046228   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:07.471088   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-099603 node add --control-plane --alsologtostderr -v 5: (1m12.005739986s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-099603 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.70s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

TestJSONOutput/start/Command (77.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-735460 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1123 08:44:21.116485   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-735460 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.518029077s)
--- PASS: TestJSONOutput/start/Command (77.52s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-735460 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-735460 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-735460 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-735460 --output=json --user=testUser: (6.841844864s)
--- PASS: TestJSONOutput/stop/Command (6.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-781593 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-781593 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.8933ms)
-- stdout --
	{"specversion":"1.0","id":"61e233c5-22ca-4bd2-902e-076c8a53155d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-781593] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"01f8971f-19ce-47a3-b425-0054a0a9f1cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"25d871a0-de50-4d51-afa8-09bcf908347f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"245d6d8c-fefb-4093-9ce4-c1c5ae793485","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig"}}
	{"specversion":"1.0","id":"e3bf05d0-2931-40de-9b89-1d6832e51380","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube"}}
	{"specversion":"1.0","id":"9ef0f883-138c-455a-befa-ca8db9952793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e178b168-c5e0-4666-ba7d-eddd8634c3e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9c8b8f3d-bb94-4a09-9fa7-4792aceadc36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-781593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-781593
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (78.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-309520 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-309520 --driver=kvm2  --container-runtime=crio: (37.510308501s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-312596 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-312596 --driver=kvm2  --container-runtime=crio: (38.380242794s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-309520
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-312596
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-312596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-312596
helpers_test.go:175: Cleaning up "first-309520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-309520
--- PASS: TestMinikubeProfile (78.14s)

TestMountStart/serial/StartWithMountFirst (19.28s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-066210 --memory=3072 --mount-string /tmp/TestMountStartserial2046110439/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-066210 --memory=3072 --mount-string /tmp/TestMountStartserial2046110439/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.278622075s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.28s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-066210 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-066210 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (21.94s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-082058 --memory=3072 --mount-string /tmp/TestMountStartserial2046110439/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-082058 --memory=3072 --mount-string /tmp/TestMountStartserial2046110439/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.934959102s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.94s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-082058 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-082058 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.53s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-066210 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.53s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-082058 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-082058 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-082058
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-082058: (1.215394363s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (16.95s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-082058
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-082058: (15.951788098s)
--- PASS: TestMountStart/serial/RestartStopped (16.95s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-082058 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-082058 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (97.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901565 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1123 08:47:58.045512   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:48:07.471519   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901565 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m36.694302136s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.02s)

TestMultiNode/serial/DeployApp2Nodes (5.02s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-901565 -- rollout status deployment/busybox: (3.498963647s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-d8ksv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-jwzdp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-d8ksv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-jwzdp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-d8ksv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-jwzdp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)
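
Note: the rollout/nslookup sequence above checks that pod DNS works from pods scheduled on both nodes. A minimal sketch of the same probe, assuming a busybox deployment with one replica per node (names illustrative):

	kubectl rollout status deployment/busybox
	# resolve an external name and the in-cluster API service from inside a pod
	kubectl exec deploy/busybox -- nslookup kubernetes.io
	kubectl exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local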

TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-d8ksv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-d8ksv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-jwzdp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901565 -- exec busybox-7b57f96db7-jwzdp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
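
Note: the awk/cut pipeline depends on busybox nslookup formatting; the test expects the address resolved for host.minikube.internal on output line 5, and 192.168.39.1 is the host-side gateway of the KVM network in this run. A minimal sketch against an arbitrary pod (pod name is a placeholder):

	kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"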

TestMultiNode/serial/AddNode (41.01s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-901565 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-901565 -v=5 --alsologtostderr: (40.561874768s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.01s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-901565 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

TestMultiNode/serial/CopyFile (5.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp testdata/cp-test.txt multinode-901565:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile133649816/001/cp-test_multinode-901565.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565:/home/docker/cp-test.txt multinode-901565-m02:/home/docker/cp-test_multinode-901565_multinode-901565-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m02 "sudo cat /home/docker/cp-test_multinode-901565_multinode-901565-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565:/home/docker/cp-test.txt multinode-901565-m03:/home/docker/cp-test_multinode-901565_multinode-901565-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m03 "sudo cat /home/docker/cp-test_multinode-901565_multinode-901565-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp testdata/cp-test.txt multinode-901565-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile133649816/001/cp-test_multinode-901565-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565-m02:/home/docker/cp-test.txt multinode-901565:/home/docker/cp-test_multinode-901565-m02_multinode-901565.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565 "sudo cat /home/docker/cp-test_multinode-901565-m02_multinode-901565.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565-m02:/home/docker/cp-test.txt multinode-901565-m03:/home/docker/cp-test_multinode-901565-m02_multinode-901565-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m03 "sudo cat /home/docker/cp-test_multinode-901565-m02_multinode-901565-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp testdata/cp-test.txt multinode-901565-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile133649816/001/cp-test_multinode-901565-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565-m03:/home/docker/cp-test.txt multinode-901565:/home/docker/cp-test_multinode-901565-m03_multinode-901565.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565 "sudo cat /home/docker/cp-test_multinode-901565-m03_multinode-901565.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 cp multinode-901565-m03:/home/docker/cp-test.txt multinode-901565-m02:/home/docker/cp-test_multinode-901565-m03_multinode-901565-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 ssh -n multinode-901565-m02 "sudo cat /home/docker/cp-test_multinode-901565-m03_multinode-901565-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.88s)
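
Note: the matrix above covers every direction `minikube cp` supports in a multi-node profile: host to node, node to host, and node to node, with each node addressed as <machine>:<path>. A minimal sketch (profile and paths illustrative):

	minikube -p demo cp testdata/cp-test.txt demo-m02:/home/docker/cp-test.txt   # host -> worker
	minikube -p demo cp demo-m02:/home/docker/cp-test.txt /tmp/out.txt           # worker -> host
	minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"         # read back over ssh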

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-901565 node stop m03: (1.770539407s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901565 status: exit status 7 (323.779788ms)
-- stdout --
	multinode-901565
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-901565-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-901565-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr: exit status 7 (322.643208ms)
-- stdout --
	multinode-901565
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-901565-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-901565-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1123 08:49:53.234481   36506 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:49:53.234592   36506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:49:53.234603   36506 out.go:374] Setting ErrFile to fd 2...
	I1123 08:49:53.234609   36506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:49:53.234774   36506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:49:53.234930   36506 out.go:368] Setting JSON to false
	I1123 08:49:53.234955   36506 mustload.go:66] Loading cluster: multinode-901565
	I1123 08:49:53.235084   36506 notify.go:221] Checking for updates...
	I1123 08:49:53.235271   36506 config.go:182] Loaded profile config "multinode-901565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:49:53.235286   36506 status.go:174] checking status of multinode-901565 ...
	I1123 08:49:53.237128   36506 status.go:371] multinode-901565 host status = "Running" (err=<nil>)
	I1123 08:49:53.237150   36506 host.go:66] Checking if "multinode-901565" exists ...
	I1123 08:49:53.239508   36506 main.go:143] libmachine: domain multinode-901565 has defined MAC address 52:54:00:c5:8c:a6 in network mk-multinode-901565
	I1123 08:49:53.239895   36506 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c5:8c:a6", ip: ""} in network mk-multinode-901565: {Iface:virbr1 ExpiryTime:2025-11-23 09:47:35 +0000 UTC Type:0 Mac:52:54:00:c5:8c:a6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-901565 Clientid:01:52:54:00:c5:8c:a6}
	I1123 08:49:53.239925   36506 main.go:143] libmachine: domain multinode-901565 has defined IP address 192.168.39.211 and MAC address 52:54:00:c5:8c:a6 in network mk-multinode-901565
	I1123 08:49:53.240070   36506 host.go:66] Checking if "multinode-901565" exists ...
	I1123 08:49:53.240242   36506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:49:53.242152   36506 main.go:143] libmachine: domain multinode-901565 has defined MAC address 52:54:00:c5:8c:a6 in network mk-multinode-901565
	I1123 08:49:53.242502   36506 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c5:8c:a6", ip: ""} in network mk-multinode-901565: {Iface:virbr1 ExpiryTime:2025-11-23 09:47:35 +0000 UTC Type:0 Mac:52:54:00:c5:8c:a6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-901565 Clientid:01:52:54:00:c5:8c:a6}
	I1123 08:49:53.242523   36506 main.go:143] libmachine: domain multinode-901565 has defined IP address 192.168.39.211 and MAC address 52:54:00:c5:8c:a6 in network mk-multinode-901565
	I1123 08:49:53.242670   36506 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/multinode-901565/id_rsa Username:docker}
	I1123 08:49:53.323666   36506 ssh_runner.go:195] Run: systemctl --version
	I1123 08:49:53.330920   36506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:49:53.348862   36506 kubeconfig.go:125] found "multinode-901565" server: "https://192.168.39.211:8443"
	I1123 08:49:53.348888   36506 api_server.go:166] Checking apiserver status ...
	I1123 08:49:53.348922   36506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:49:53.367450   36506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1335/cgroup
	W1123 08:49:53.379541   36506 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1335/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:49:53.379601   36506 ssh_runner.go:195] Run: ls
	I1123 08:49:53.385524   36506 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1123 08:49:53.391156   36506 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1123 08:49:53.391177   36506 status.go:463] multinode-901565 apiserver status = Running (err=<nil>)
	I1123 08:49:53.391188   36506 status.go:176] multinode-901565 status: &{Name:multinode-901565 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:49:53.391214   36506 status.go:174] checking status of multinode-901565-m02 ...
	I1123 08:49:53.392789   36506 status.go:371] multinode-901565-m02 host status = "Running" (err=<nil>)
	I1123 08:49:53.392808   36506 host.go:66] Checking if "multinode-901565-m02" exists ...
	I1123 08:49:53.395698   36506 main.go:143] libmachine: domain multinode-901565-m02 has defined MAC address 52:54:00:57:95:77 in network mk-multinode-901565
	I1123 08:49:53.396144   36506 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:95:77", ip: ""} in network mk-multinode-901565: {Iface:virbr1 ExpiryTime:2025-11-23 09:48:29 +0000 UTC Type:0 Mac:52:54:00:57:95:77 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-901565-m02 Clientid:01:52:54:00:57:95:77}
	I1123 08:49:53.396173   36506 main.go:143] libmachine: domain multinode-901565-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:57:95:77 in network mk-multinode-901565
	I1123 08:49:53.396358   36506 host.go:66] Checking if "multinode-901565-m02" exists ...
	I1123 08:49:53.396596   36506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:49:53.398420   36506 main.go:143] libmachine: domain multinode-901565-m02 has defined MAC address 52:54:00:57:95:77 in network mk-multinode-901565
	I1123 08:49:53.398764   36506 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:95:77", ip: ""} in network mk-multinode-901565: {Iface:virbr1 ExpiryTime:2025-11-23 09:48:29 +0000 UTC Type:0 Mac:52:54:00:57:95:77 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-901565-m02 Clientid:01:52:54:00:57:95:77}
	I1123 08:49:53.398788   36506 main.go:143] libmachine: domain multinode-901565-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:57:95:77 in network mk-multinode-901565
	I1123 08:49:53.398929   36506 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/multinode-901565-m02/id_rsa Username:docker}
	I1123 08:49:53.483736   36506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:49:53.500254   36506 status.go:176] multinode-901565-m02 status: &{Name:multinode-901565-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:49:53.500315   36506 status.go:174] checking status of multinode-901565-m03 ...
	I1123 08:49:53.502056   36506 status.go:371] multinode-901565-m03 host status = "Stopped" (err=<nil>)
	I1123 08:49:53.502075   36506 status.go:384] host is not running, skipping remaining checks
	I1123 08:49:53.502082   36506 status.go:176] multinode-901565-m03 status: &{Name:multinode-901565-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
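
Note: `minikube status` exits non-zero (7 in these runs) whenever a profile machine is stopped, so the non-zero exits above are the expected signal rather than a failure. A minimal sketch:

	minikube -p demo node stop m03
	minikube -p demo status; echo "status exit: $?"   # 7 while m03 is stopped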

TestMultiNode/serial/StartAfterStop (38.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-901565 node start m03 -v=5 --alsologtostderr: (37.796359275s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.30s)

TestMultiNode/serial/RestartKeepsNodes (306.48s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-901565
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-901565
E1123 08:51:10.546259   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:52:58.045552   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:07.479379   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-901565: (2m53.772033858s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901565 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901565 --wait=true -v=5 --alsologtostderr: (2m12.596602319s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-901565
--- PASS: TestMultiNode/serial/RestartKeepsNodes (306.48s)

TestMultiNode/serial/DeleteNode (2.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-901565 node delete m03: (2.055976749s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.50s)

TestMultiNode/serial/StopMultiNode (172.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 stop
E1123 08:57:58.045603   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:58:07.479405   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-901565 stop: (2m52.449636431s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901565 status: exit status 7 (59.15151ms)
-- stdout --
	multinode-901565
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-901565-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr: exit status 7 (58.584483ms)
-- stdout --
	multinode-901565
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-901565-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1123 08:58:33.347411   38821 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:33.347508   38821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:33.347514   38821 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:33.347518   38821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:33.347694   38821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 08:58:33.347834   38821 out.go:368] Setting JSON to false
	I1123 08:58:33.347859   38821 mustload.go:66] Loading cluster: multinode-901565
	I1123 08:58:33.347976   38821 notify.go:221] Checking for updates...
	I1123 08:58:33.348168   38821 config.go:182] Loaded profile config "multinode-901565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:33.348183   38821 status.go:174] checking status of multinode-901565 ...
	I1123 08:58:33.350353   38821 status.go:371] multinode-901565 host status = "Stopped" (err=<nil>)
	I1123 08:58:33.350372   38821 status.go:384] host is not running, skipping remaining checks
	I1123 08:58:33.350378   38821 status.go:176] multinode-901565 status: &{Name:multinode-901565 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:58:33.350398   38821 status.go:174] checking status of multinode-901565-m02 ...
	I1123 08:58:33.351801   38821 status.go:371] multinode-901565-m02 host status = "Stopped" (err=<nil>)
	I1123 08:58:33.351816   38821 status.go:384] host is not running, skipping remaining checks
	I1123 08:58:33.351822   38821 status.go:176] multinode-901565-m02 status: &{Name:multinode-901565-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (172.57s)

TestMultiNode/serial/RestartMultiNode (86.55s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901565 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901565 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m26.076558962s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901565 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.55s)

TestMultiNode/serial/ValidateNameConflict (41.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-901565
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901565-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-901565-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.2893ms)
-- stdout --
	* [multinode-901565-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-901565-m02' is duplicated with machine name 'multinode-901565-m02' in profile 'multinode-901565'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901565-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901565-m03 --driver=kvm2  --container-runtime=crio: (40.450403836s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-901565
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-901565: exit status 80 (209.227279ms)
-- stdout --
	* Adding node m03 to cluster multinode-901565 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-901565-m03 already exists in multinode-901565-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-901565-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.47s)
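
Note: the two rejections above are name-collision guards: "<profile>-m02"-style names are already taken by the extra machines of a multi-node profile (MK_USAGE, exit 14), and `node add` refuses when the next node name is itself an existing profile (GUEST_NODE_ADD, exit 80). A minimal sketch of the first guard (profile name illustrative):

	minikube start -p demo --nodes=2 --driver=kvm2 --container-runtime=crio   # creates machines demo and demo-m02
	minikube start -p demo-m02 --driver=kvm2 --container-runtime=crio         # exits 14: duplicated machine name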

TestScheduledStopUnix (108.36s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-429143 --memory=3072 --driver=kvm2  --container-runtime=crio
E1123 09:02:58.045532   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:03:07.471029   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-429143 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.726833892s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-429143 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1123 09:03:24.072392   40910 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:03:24.072636   40910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:24.072645   40910 out.go:374] Setting ErrFile to fd 2...
	I1123 09:03:24.072649   40910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:24.072845   40910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 09:03:24.073061   40910 out.go:368] Setting JSON to false
	I1123 09:03:24.073140   40910 mustload.go:66] Loading cluster: scheduled-stop-429143
	I1123 09:03:24.073419   40910 config.go:182] Loaded profile config "scheduled-stop-429143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:03:24.073486   40910 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/config.json ...
	I1123 09:03:24.073654   40910 mustload.go:66] Loading cluster: scheduled-stop-429143
	I1123 09:03:24.073743   40910 config.go:182] Loaded profile config "scheduled-stop-429143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-429143 -n scheduled-stop-429143
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-429143 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1123 09:03:24.368800   40956 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:03:24.368904   40956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:24.368916   40956 out.go:374] Setting ErrFile to fd 2...
	I1123 09:03:24.368924   40956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:24.369179   40956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 09:03:24.369402   40956 out.go:368] Setting JSON to false
	I1123 09:03:24.369623   40956 daemonize_unix.go:73] killing process 40945 as it is an old scheduled stop
	I1123 09:03:24.369731   40956 mustload.go:66] Loading cluster: scheduled-stop-429143
	I1123 09:03:24.370212   40956 config.go:182] Loaded profile config "scheduled-stop-429143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:03:24.370305   40956 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/config.json ...
	I1123 09:03:24.370561   40956 mustload.go:66] Loading cluster: scheduled-stop-429143
	I1123 09:03:24.370710   40956 config.go:182] Loaded profile config "scheduled-stop-429143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 09:03:24.375648   18055 retry.go:31] will retry after 55.861µs: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.376791   18055 retry.go:31] will retry after 217.282µs: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.377908   18055 retry.go:31] will retry after 136.988µs: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.379030   18055 retry.go:31] will retry after 485.603µs: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.380153   18055 retry.go:31] will retry after 415.278µs: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.381267   18055 retry.go:31] will retry after 1.082666ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.382424   18055 retry.go:31] will retry after 877.176µs: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.383507   18055 retry.go:31] will retry after 2.121071ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.386672   18055 retry.go:31] will retry after 2.953796ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.389894   18055 retry.go:31] will retry after 4.073525ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.394026   18055 retry.go:31] will retry after 7.5805ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.402290   18055 retry.go:31] will retry after 5.699181ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.408553   18055 retry.go:31] will retry after 6.945911ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.415844   18055 retry.go:31] will retry after 22.336082ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.439099   18055 retry.go:31] will retry after 22.156014ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
I1123 09:03:24.462347   18055 retry.go:31] will retry after 60.765576ms: open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-429143 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-429143 -n scheduled-stop-429143
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-429143
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-429143 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1123 09:03:50.083199   41108 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:03:50.083419   41108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:50.083427   41108 out.go:374] Setting ErrFile to fd 2...
	I1123 09:03:50.083431   41108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:50.083617   41108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 09:03:50.083828   41108 out.go:368] Setting JSON to false
	I1123 09:03:50.083902   41108 mustload.go:66] Loading cluster: scheduled-stop-429143
	I1123 09:03:50.084203   41108 config.go:182] Loaded profile config "scheduled-stop-429143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:03:50.084267   41108 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/scheduled-stop-429143/config.json ...
	I1123 09:03:50.084444   41108 mustload.go:66] Loading cluster: scheduled-stop-429143
	I1123 09:03:50.084547   41108 config.go:182] Loaded profile config "scheduled-stop-429143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-429143
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-429143: exit status 7 (59.500043ms)
-- stdout --
	scheduled-stop-429143
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-429143 -n scheduled-stop-429143
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-429143 -n scheduled-stop-429143: exit status 7 (58.917201ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-429143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-429143
--- PASS: TestScheduledStopUnix (108.36s)
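
Note: the scheduled-stop flow above is driven by three `minikube stop` flags, all visible in the logs: a later --schedule replaces an earlier one (the old daemonized pid is killed), and --cancel-scheduled clears any pending stop. A minimal sketch (profile name illustrative):

	minikube stop -p demo --schedule 5m        # returns immediately, stop runs in the background
	minikube stop -p demo --schedule 15s       # supersedes the 5m schedule
	minikube stop -p demo --cancel-scheduled   # "All existing scheduled stops cancelled"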

TestRunningBinaryUpgrade (115.9s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2503414174 start -p running-upgrade-985808 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2503414174 start -p running-upgrade-985808 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m2.626087481s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-985808 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-985808 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.86112738s)
helpers_test.go:175: Cleaning up "running-upgrade-985808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-985808
--- PASS: TestRunningBinaryUpgrade (115.90s)

TestKubernetesUpgrade (218.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.268913094s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-985491
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-985491: (2.000137215s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-985491 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-985491 status --format={{.Host}}: exit status 7 (85.468442ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.109332857s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-985491 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (75.111436ms)
-- stdout --
	* [kubernetes-upgrade-985491] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-985491
	    minikube start -p kubernetes-upgrade-985491 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9854912 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-985491 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-985491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.162084268s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-985491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-985491
--- PASS: TestKubernetesUpgrade (218.61s)
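
Note: the sequence above upgrades a stopped v1.28.0 cluster in place to v1.34.1, then confirms that a downgrade attempt fails fast (K8S_DOWNGRADE_UNSUPPORTED, exit 106) without touching the cluster. A minimal sketch, following the recovery path the stderr block itself suggests (profile name illustrative):

	minikube start -p demo --kubernetes-version=v1.28.0
	minikube stop -p demo
	minikube start -p demo --kubernetes-version=v1.34.1   # upgrade on restart: supported
	minikube start -p demo --kubernetes-version=v1.28.0   # refused; delete and recreate instead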

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582859 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-582859 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (101.193459ms)
-- stdout --
	* [NoKubernetes-582859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (95.15s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582859 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582859 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m34.897775077s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-582859 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.15s)

TestStoppedBinaryUpgrade/Setup (0.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

TestStoppedBinaryUpgrade/Upgrade (141.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.4263510792 start -p stopped-upgrade-705472 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.4263510792 start -p stopped-upgrade-705472 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m17.270992525s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.4263510792 -p stopped-upgrade-705472 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.4263510792 -p stopped-upgrade-705472 stop: (1.67000068s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-705472 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-705472 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.292987939s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.23s)

TestNoKubernetes/serial/StartWithStopK8s (48.99s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582859 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582859 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.138056898s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-582859 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-582859 status -o json: exit status 2 (192.707759ms)
-- stdout --
	{"Name":"NoKubernetes-582859","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-582859
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (48.99s)
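Note: `status -o json` deliberately exits 2 here: the JSON above shows the host Running while kubelet and apiserver are Stopped after the --no-kubernetes restart. A sketch for scripting against that output (assumes jq is available on the host):

	$ out/minikube-linux-amd64 -p NoKubernetes-582859 status -o json | jq -r .Kubelet
	Stopped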

TestNoKubernetes/serial/Start (43.61s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582859 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582859 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.612877127s)
--- PASS: TestNoKubernetes/serial/Start (43.61s)

TestNetworkPlugins/group/false (6.26s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-001636 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-001636 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (130.320798ms)
-- stdout --
	* [false-001636] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1123 09:07:25.910635   44350 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:07:25.910760   44350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:25.910771   44350 out.go:374] Setting ErrFile to fd 2...
	I1123 09:07:25.910777   44350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:25.911066   44350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
	I1123 09:07:25.911720   44350 out.go:368] Setting JSON to false
	I1123 09:07:25.912787   44350 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6595,"bootTime":1763882251,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:07:25.912841   44350 start.go:143] virtualization: kvm guest
	I1123 09:07:25.917801   44350 out.go:179] * [false-001636] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:07:25.919073   44350 notify.go:221] Checking for updates...
	I1123 09:07:25.919088   44350 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:07:25.920281   44350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:07:25.921379   44350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
	I1123 09:07:25.922478   44350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
	I1123 09:07:25.925642   44350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:07:25.927018   44350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:07:25.928484   44350 config.go:182] Loaded profile config "NoKubernetes-582859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1123 09:07:25.928596   44350 config.go:182] Loaded profile config "kubernetes-upgrade-985491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:25.928678   44350 config.go:182] Loaded profile config "stopped-upgrade-705472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1123 09:07:25.928754   44350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:07:25.961785   44350 out.go:179] * Using the kvm2 driver based on user configuration
	I1123 09:07:25.962942   44350 start.go:309] selected driver: kvm2
	I1123 09:07:25.962959   44350 start.go:927] validating driver "kvm2" against <nil>
	I1123 09:07:25.962974   44350 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:07:25.965058   44350 out.go:203] 
	W1123 09:07:25.966043   44350 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 09:07:25.967074   44350 out.go:203] 
** /stderr **
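Note: this is the guard the test expects: with --container-runtime=crio, minikube rejects --cni=false because cri-o needs a CNI plugin for pod networking. A hypothetical start that would pass this validation (the bridge CNI choice is an assumption, not from this run):

	$ out/minikube-linux-amd64 start -p false-001636 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio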
net_test.go:88: 
----------------------- debugLogs start: false-001636 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-001636

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-001636

>>> host: /etc/nsswitch.conf:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /etc/hosts:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /etc/resolv.conf:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-001636

>>> host: crictl pods:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: crictl containers:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> k8s: describe netcat deployment:
error: context "false-001636" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-001636" does not exist

>>> k8s: netcat logs:
error: context "false-001636" does not exist

>>> k8s: describe coredns deployment:
error: context "false-001636" does not exist

>>> k8s: describe coredns pods:
error: context "false-001636" does not exist

>>> k8s: coredns logs:
error: context "false-001636" does not exist

>>> k8s: describe api server pod(s):
error: context "false-001636" does not exist

>>> k8s: api server logs:
error: context "false-001636" does not exist

>>> host: /etc/cni:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: ip a s:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: ip r s:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: iptables-save:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: iptables table nat:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> k8s: describe kube-proxy daemon set:
error: context "false-001636" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-001636" does not exist

>>> k8s: kube-proxy logs:
error: context "false-001636" does not exist

>>> host: kubelet daemon status:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: kubelet daemon config:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> k8s: kubelet logs:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:07:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.2:8443
  name: kubernetes-upgrade-985491
contexts:
- context:
    cluster: kubernetes-upgrade-985491
    user: kubernetes-upgrade-985491
  name: kubernetes-upgrade-985491
current-context: kubernetes-upgrade-985491
kind: Config
users:
- name: kubernetes-upgrade-985491
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kubernetes-upgrade-985491/client.crt
    client-key: /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kubernetes-upgrade-985491/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-001636

>>> host: docker daemon status:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: docker daemon config:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /etc/docker/daemon.json:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: docker system info:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: cri-docker daemon status:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: cri-docker daemon config:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: cri-dockerd version:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: containerd daemon status:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: containerd daemon config:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /etc/containerd/config.toml:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: containerd config dump:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: crio daemon status:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: crio daemon config:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: /etc/crio:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

>>> host: crio config:
* Profile "false-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001636"

----------------------- debugLogs end: false-001636 [took: 5.965225208s] --------------------------------
helpers_test.go:175: Cleaning up "false-001636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-001636
--- PASS: TestNetworkPlugins/group/false (6.26s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21969-14048/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-582859 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-582859 "sudo systemctl is-active --quiet service kubelet": exit status 1 (166.053227ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
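Note: the assertion is only that `systemctl is-active` exits non-zero, i.e. no kubelet unit is active in a --no-kubernetes guest; status 4 is what this image reports for that unit. A rough manual equivalent of the check:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-582859 "sudo systemctl is-active kubelet" || echo "kubelet not active"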

TestNoKubernetes/serial/ProfileList (8.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (4.952457144s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E1123 09:07:50.547964   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.522089134s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.47s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-582859
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-582859: (1.214395623s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (50.22s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582859 --driver=kvm2  --container-runtime=crio
E1123 09:07:58.046405   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:07.470924   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582859 --driver=kvm2  --container-runtime=crio: (50.217490246s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (50.22s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-705472
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-705472: (1.075015604s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestPause/serial/Start (76.18s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-471969 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-471969 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m16.177465243s)
--- PASS: TestPause/serial/Start (76.18s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-582859 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-582859 "sudo systemctl is-active --quiet service kubelet": exit status 1 (179.921ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestISOImage/Setup (27.59s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-959925 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-959925 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.589441446s)
--- PASS: TestISOImage/Setup (27.59s)

TestPause/serial/SecondStartNoReconfiguration (42.07s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-471969 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-471969 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.03328325s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.07s)

TestISOImage/Binaries/crictl (0.17s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)
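Note: this subtest and the ten Binaries subtests that follow are the same one-line probe, ssh "which <tool>", against the guest-959925 ISO VM. The whole matrix can be spot-checked by hand in one command (tool list collected from the subtests):

	$ out/minikube-linux-amd64 -p guest-959925 ssh "which crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService"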

TestISOImage/Binaries/curl (0.17s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

TestISOImage/Binaries/docker (0.18s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

TestISOImage/Binaries/git (0.18s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

TestISOImage/Binaries/iptables (0.19s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

TestISOImage/Binaries/podman (0.19s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

TestISOImage/Binaries/rsync (0.18s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

TestISOImage/Binaries/socat (0.17s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

TestISOImage/Binaries/wget (0.17s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

TestISOImage/Binaries/VBoxControl (0.18s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.18s)

TestISOImage/Binaries/VBoxService (0.18s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)

TestNetworkPlugins/group/auto/Start (88.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m28.235533779s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.24s)

TestNetworkPlugins/group/kindnet/Start (80.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m20.609091465s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.61s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-471969 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-471969 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-471969 --output=json --layout=cluster: exit status 2 (241.748158ms)
-- stdout --
	{"Name":"pause-471969","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-471969","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
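Note: in the cluster layout above, StatusCode 418 / StatusName "Paused" marks the paused apiserver, and the overall exit code 2 is intentional for any non-Running state. A sketch for pulling the component state out of that JSON (assumes jq is available):

	$ out/minikube-linux-amd64 status -p pause-471969 --output=json --layout=cluster | jq -r '.Nodes[].Components.apiserver.StatusName'
	Paused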

TestPause/serial/Unpause (0.77s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-471969 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

TestPause/serial/PauseAgain (1.39s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-471969 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-471969 --alsologtostderr -v=5: (1.390961307s)
--- PASS: TestPause/serial/PauseAgain (1.39s)

TestPause/serial/DeletePaused (0.84s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-471969 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.84s)

TestPause/serial/VerifyDeletedResources (5.39s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (5.393232394s)
--- PASS: TestPause/serial/VerifyDeletedResources (5.39s)
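Note: deletion is verified through the profile list rather than by inspecting libvirt directly. A sketch of the same check (assumes jq, and assumes minikube's valid/invalid profile-list JSON schema):

	$ out/minikube-linux-amd64 profile list --output json | jq -r '.valid[].Name'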

TestNetworkPlugins/group/calico/Start (72.03s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m12.030208404s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.03s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-xlgcl" [b5be018e-82b5-4800-9243-f42bfe891f59] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00428482s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
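Note: the ControllerPod gate waits for a Ready kindnet pod selected by label. A rough kubectl equivalent of that wait:

	$ kubectl --context kindnet-001636 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m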

TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-001636 "pgrep -a kubelet"
I1123 09:11:08.695267   18055 config.go:182] Loaded profile config "auto-001636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-001636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jhqpj" [61f862ab-bcf5-45cf-a1f5-d45b0609ea4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jhqpj" [61f862ab-bcf5-45cf-a1f5-d45b0609ea4c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006196707s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-001636 "pgrep -a kubelet"
I1123 09:11:09.199356   18055 config.go:182] Loaded profile config "kindnet-001636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-001636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bsn9h" [09cfa955-1758-4d8a-9097-b9769e99a720] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bsn9h" [09cfa955-1758-4d8a-9097-b9769e99a720] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004293856s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-001636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-001636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (73.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.773311688s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.77s)

TestNetworkPlugins/group/enable-default-cni/Start (103.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m43.058746325s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (103.06s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-95786" [8f339104-b249-49dc-a00f-88d7ec312208] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-95786" [8f339104-b249-49dc-a00f-88d7ec312208] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.152063927s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.15s)
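
ControllerPod blocks until the CNI's node agent reports Ready; note the first snapshot above shows calico-node Running but not yet Ready. Roughly the same wait with plain kubectl:

	# Block until the calico-node pod reports the Ready condition.
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=calico-node --timeout=10m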

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-001636 "pgrep -a kubelet"
I1123 09:11:43.561042   18055 config.go:182] Loaded profile config "calico-001636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)
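
KubeletFlags only needs the kubelet's command line, and pgrep -a prints exactly that (PID plus full argv), so the harness can assert on individual flags:

	# Show the running kubelet and every flag it was started with.
	minikube ssh -p calico-001636 "pgrep -a kubelet"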

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-001636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nbjcw" [766c2511-9377-480d-88a6-a13fbb4f78c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nbjcw" [766c2511-9377-480d-88a6-a13fbb4f78c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005620116s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.44s)
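
kubectl replace --force deletes and recreates the fixture in one step, which keeps reruns idempotent even when an old copy is still around. A sketch of the equivalent deploy-and-wait (the fixture's contents are not included in this report; deployment/netcat is its known name):

	kubectl replace --force -f testdata/netcat-deployment.yaml
	# Wait until the Deployment reports an available replica.
	kubectl wait --for=condition=Available deployment/netcat --timeout=15m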

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-001636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (80.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.871632006s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.87s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-001636 "pgrep -a kubelet"
I1123 09:12:49.197366   18055 config.go:182] Loaded profile config "custom-flannel-001636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-001636 replace --force -f testdata/netcat-deployment.yaml
I1123 09:12:49.936406   18055 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1123 09:12:49.953126   18055 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dpcrt" [bf1ef6da-759f-4218-84f0-8d4cfde42d2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dpcrt" [bf1ef6da-759f-4218-84f0-8d4cfde42d2c] Running
E1123 09:12:58.045642   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005132058s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.78s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-001636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (57.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-001636 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (57.189549461s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-001636 "pgrep -a kubelet"
I1123 09:13:19.443691   18055 config.go:182] Loaded profile config "enable-default-cni-001636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-001636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6b9nz" [09d9271f-95b2-4db3-9dbc-b5cca0367bd1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6b9nz" [09d9271f-95b2-4db3-9dbc-b5cca0367bd1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003287447s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-001636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zgnmv" [de3aca73-7c56-452f-86d5-6d062fc66229] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005554036s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-001636 "pgrep -a kubelet"
I1123 09:13:38.551137   18055 config.go:182] Loaded profile config "flannel-001636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-001636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x65tl" [f993d17d-cfd2-40bb-ab7f-a24ca3b82e26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x65tl" [f993d17d-cfd2-40bb-ab7f-a24ca3b82e26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.006830258s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (56.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-107629 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-107629 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (56.689292866s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (56.69s)
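
This start pins an older release with --kubernetes-version, while the kvm2-specific flags pick libvirt's default network (--kvm-network) and the system libvirt socket (--kvm-qemu-uri). Trimmed to the flags that matter, with an illustrative profile name:

	minikube start -p old-k8s-demo --driver=kvm2 --container-runtime=crio \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --kubernetes-version=v1.28.0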

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-001636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (102.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-088187 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-088187 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m42.377297752s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.38s)
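
--preload=false skips the preloaded image tarball, so every image is pulled individually; that largely explains why this FirstStart takes ~102s against ~57s for the bridge profile above. Sketch:

	# Force individual image pulls instead of the preload tarball.
	minikube start -p no-preload-demo --driver=kvm2 --container-runtime=crio \
	  --preload=false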

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-001636 "pgrep -a kubelet"
I1123 09:14:12.821954   18055 config.go:182] Loaded profile config "bridge-001636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-001636 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-001636 replace --force -f testdata/netcat-deployment.yaml: (1.564870411s)
I1123 09:14:14.914677   18055 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ln7xk" [ee10a60d-f1a5-4034-a232-eff693cd5c90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ln7xk" [ee10a60d-f1a5-4034-a232-eff693cd5c90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006136165s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.63s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-001636 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-001636 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (82.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-535218 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-535218 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.704572845s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-107629 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2a556c0a-aff0-41f1-9788-5a6b2ede5ad6] Pending
helpers_test.go:352: "busybox" [2a556c0a-aff0-41f1-9788-5a6b2ede5ad6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2a556c0a-aff0-41f1-9788-5a6b2ede5ad6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004214011s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-107629 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)
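
DeployApp creates a busybox pod and then reads its open-file-descriptor limit, a cheap end-to-end check that exec works and the runtime applied its ulimits. By hand:

	kubectl create -f testdata/busybox.yaml
	kubectl wait --for=condition=Ready pod/busybox --timeout=8m
	# Print the container's max open file descriptors.
	kubectl exec busybox -- /bin/sh -c "ulimit -n"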

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-107629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-107629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.274195076s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-107629 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)
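
--images and --registries repoint an addon's images per component; here MetricsServer is aimed at echoserver on the unreachable registry fake.domain, apparently so the step exercises the override wiring rather than a real image pull. Pattern:

	minikube addons enable metrics-server -p old-k8s-version-107629 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain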

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (90.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-107629 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-107629 --alsologtostderr -v=3: (1m30.495893409s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (90.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-088187 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3ee06668-6d0b-4749-8094-c56386163a7e] Pending
helpers_test.go:352: "busybox" [3ee06668-6d0b-4749-8094-c56386163a7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3ee06668-6d0b-4749-8094-c56386163a7e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003411828s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-088187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-088187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-088187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016799298s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-088187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-816263 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-816263 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.850053795s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.85s)
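
--apiserver-port moves the API server off the default 8443; kubeconfig and the rest of the tooling pick the new port up automatically. Sketch with an illustrative profile name:

	minikube start -p diff-port-demo --driver=kvm2 --apiserver-port=8444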

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (89.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-088187 --alsologtostderr -v=3
E1123 09:16:03.011971   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:03.018378   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:03.029791   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:03.051221   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:03.092666   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:03.174569   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:03.336243   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:03.658362   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:04.300306   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-088187 --alsologtostderr -v=3: (1m29.123241861s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-535218 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [12c9b2bd-860f-4cdb-a4a5-e1a9cb195f42] Pending
E1123 09:16:05.581814   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [12c9b2bd-860f-4cdb-a4a5-e1a9cb195f42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1123 09:16:08.143784   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [12c9b2bd-860f-4cdb-a4a5-e1a9cb195f42] Running
E1123 09:16:08.936702   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:08.943120   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:08.954642   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:08.976065   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:09.017519   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:09.098992   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:09.260681   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:09.582553   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:10.224021   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:11.505379   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:13.266117   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004371633s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-535218 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-535218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1123 09:16:14.066864   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-535218 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (83.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-535218 --alsologtostderr -v=3
E1123 09:16:19.188193   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:23.508209   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-535218 --alsologtostderr -v=3: (1m23.474243875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107629 -n old-k8s-version-107629
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107629 -n old-k8s-version-107629: exit status 7 (66.43068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-107629 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
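
status exits non-zero (7 here) when the host is stopped, which the test explicitly tolerates ("may be ok"); the addon toggle still succeeds against the stopped profile, presumably taking effect on the next start. Condensed:

	minikube status --format={{.Host}} -p old-k8s-version-107629 || true   # prints "Stopped"
	minikube addons enable dashboard -p old-k8s-version-107629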

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (43.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-107629 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1123 09:16:29.429489   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.109756   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.116157   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.127561   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.149016   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.190504   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.271973   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.433299   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:37.755236   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:38.397004   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:39.678985   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:42.241095   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:43.989889   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:47.362901   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:49.910757   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:16:57.604996   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-107629 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (43.542435132s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107629 -n old-k8s-version-107629
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lp22f" [aad940e9-426d-464c-9a41-fa3b9c413a07] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lp22f" [aad940e9-426d-464c-9a41-fa3b9c413a07] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.003544114s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-816263 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1f41f343-2dd7-412f-a6a0-8b3f27d2666b] Pending
helpers_test.go:352: "busybox" [1f41f343-2dd7-412f-a6a0-8b3f27d2666b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1123 09:17:18.086275   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [1f41f343-2dd7-412f-a6a0-8b3f27d2666b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004491044s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-816263 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-816263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1123 09:17:24.951704   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-816263 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lp22f" [aad940e9-426d-464c-9a41-fa3b9c413a07] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005439187s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-107629 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (77.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-816263 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-816263 --alsologtostderr -v=3: (1m17.951599582s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (77.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-088187 -n no-preload-088187
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-088187 -n no-preload-088187: exit status 7 (59.005566ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-088187 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (59.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-088187 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-088187 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (59.399787797s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-088187 -n no-preload-088187
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (59.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-107629 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
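
image list --format=json dumps every image known to the cluster's container runtime; the verification then flags anything outside the expected Kubernetes set (the two "non-minikube" images above are the test's own fixtures). By hand:

	minikube -p old-k8s-version-107629 image list --format=json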

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-107629 --alsologtostderr -v=1
E1123 09:17:30.872036   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107629 -n old-k8s-version-107629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107629 -n old-k8s-version-107629: exit status 2 (212.799648ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-107629 -n old-k8s-version-107629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-107629 -n old-k8s-version-107629: exit status 2 (216.18513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-107629 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107629 -n old-k8s-version-107629
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-107629 -n old-k8s-version-107629
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.66s)
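
The Pause sequence reads as: pause freezes the control plane, status then reports APIServer=Paused and Kubelet=Stopped with exit status 2 (expected while paused), and unpause restores both. Condensed:

	minikube pause -p old-k8s-version-107629
	minikube status --format={{.APIServer}} -p old-k8s-version-107629   # Paused (exit 2)
	minikube unpause -p old-k8s-version-107629
	minikube status --format={{.APIServer}} -p old-k8s-version-107629   # back to Running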

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (62.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-918672 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-918672 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.771966209s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.77s)
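The --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag is handed through to kubeadm at cluster creation. One way to confirm it took effect, sketched under the assumption that the standard kubeadm-config ConfigMap is present in kube-system (it is on stock kubeadm clusters):

  # the ClusterConfiguration should show podSubnet: 10.42.0.0/16
  kubectl --context newest-cni-918672 -n kube-system get configmap kubeadm-config \
    -o jsonpath='{.data.ClusterConfiguration}' | grep podSubnet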

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535218 -n embed-certs-535218
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535218 -n embed-certs-535218: exit status 7 (86.275676ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-535218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
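Enabling an addon while the profile is stopped (exit status 7 above) only records the addon in the profile's config; it is actually deployed on the next start. The recorded state can be checked with the addons list subcommand (output layout varies by minikube version):

  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-535218 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  out/minikube-linux-amd64 addons list -p embed-certs-535218 | grep dashboard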

TestStartStop/group/embed-certs/serial/SecondStart (77.83s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-535218 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 09:17:41.120735   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:49.912981   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:49.919357   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:49.930756   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:49.952202   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:49.993654   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:50.075108   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:50.236650   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:50.558125   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:51.199775   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:52.481060   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:55.043313   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:58.046167   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/functional-427957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:17:59.048240   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/calico-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:00.165200   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:07.470624   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:10.406647   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:19.690772   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:19.697277   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:19.708690   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:19.730169   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:19.771660   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:19.853226   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:20.015328   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:20.336889   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:20.978454   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:22.260092   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:24.822254   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-535218 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m17.523250266s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535218 -n embed-certs-535218
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (77.83s)
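The cert_rotation errors interleaved above are noise from earlier in the run, not failures of this test: the kubeconfig still holds contexts for profiles (custom-flannel-001636, enable-default-cni-001636, and others) whose client certificates were removed when those profiles were deleted. A sketch for spotting such stale entries, assuming the default kubeconfig path and the client-certificate: key layout minikube writes:

  # print client certificate paths referenced by the kubeconfig that no longer exist on disk
  grep 'client-certificate:' ~/.kube/config | awk '{print $2}' \
    | while read -r crt; do [ -f "$crt" ] || echo "stale: $crt"; done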

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r9r7k" [ee908c67-a578-4a72-ab34-1a4884131959] Running
E1123 09:18:29.944432   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:30.888417   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.325419   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.331921   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.343343   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.365339   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.406649   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.488815   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.650173   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:32.972122   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:33.614040   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:34.895947   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004517023s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
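The harness polls the pod list itself; roughly the same check can be phrased with plain kubectl wait (equivalent in spirit, not the harness's actual mechanism):

  kubectl --context no-preload-088187 -n kubernetes-dashboard wait pod \
    -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s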

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r9r7k" [ee908c67-a578-4a72-ab34-1a4884131959] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004960541s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-088187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
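Besides describing the deployment, a stricter hand check is to wait on its rollout with standard kubectl (not something the harness does):

  kubectl --context no-preload-088187 -n kubernetes-dashboard \
    rollout status deploy/dashboard-metrics-scraper --timeout=60s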

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-918672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1123 09:18:37.457539   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-918672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107789522s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (89.17s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-918672 --alsologtostderr -v=3
E1123 09:18:40.186447   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-918672 --alsologtostderr -v=3: (1m29.17217326s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (89.17s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-088187 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.7s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-088187 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-088187 -n no-preload-088187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-088187 -n no-preload-088187: exit status 2 (218.566997ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-088187 -n no-preload-088187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-088187 -n no-preload-088187: exit status 2 (232.659735ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-088187 --alsologtostderr -v=1
E1123 09:18:42.579048   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-088187 -n no-preload-088187
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-088187 -n no-preload-088187
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.70s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263: exit status 7 (72.129988ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-816263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.61s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-816263 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-816263 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (46.362157879s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.61s)
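With --apiserver-port=8444 the kubeconfig entry for the profile should point at the non-default port. A quick check via standard kubectl config output (the jsonpath filter is illustrative):

  # expect a server URL ending in :8444
  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-816263")].cluster.server}'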

TestISOImage/PersistentMounts//data (0.18s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 /data | grep /data"
E1123 09:18:46.873779   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kindnet-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//data (0.18s)

TestISOImage/PersistentMounts//var/lib/docker (0.19s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

TestISOImage/PersistentMounts//var/lib/cni (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

TestISOImage/PersistentMounts//var/lib/minikube (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
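The seven PersistentMounts cases above all run the same probe, once per directory; collapsed into a loop for hand verification:

  for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet \
           /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
    # each path must sit on a persistent ext4 filesystem inside the guest
    out/minikube-linux-amd64 -p guest-959925 ssh "df -t ext4 $d | grep $d"
  done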

TestISOImage/VersionJSON (0.19s)
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.19s)
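The four fields logged above come straight from /version.json baked into the ISO; pulling them out with jq (the jq invocation is illustrative, not part of the test):

  out/minikube-linux-amd64 -p guest-959925 ssh "cat /version.json" \
    | jq -r '.iso_version, .kicbase_version, .minikube_version, .commit'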

TestISOImage/eBPFSupport (0.2s)
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-959925 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.20s)
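The eBPF probe only asserts that the kernel exposes BTF type information at /sys/kernel/btf/vmlinux. A slightly broader hand check; the second command assumes the guest kernel was built with CONFIG_IKCONFIG_PROC, which may not hold:

  out/minikube-linux-amd64 -p guest-959925 ssh \
    "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
  # optionally confirm BTF was compiled in
  out/minikube-linux-amd64 -p guest-959925 ssh \
    "zcat /proc/config.gz 2>/dev/null | grep CONFIG_DEBUG_INFO_BTF"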
E1123 09:18:52.794062   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/auto-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:18:52.820548   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4sztz" [db8b64d1-60dc-435c-9490-a0a63b24bb8e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4sztz" [db8b64d1-60dc-435c-9490-a0a63b24bb8e] Running
E1123 09:19:00.668569   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.006079843s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4sztz" [db8b64d1-60dc-435c-9490-a0a63b24bb8e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004762244s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-535218 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-535218 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.7s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-535218 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535218 -n embed-certs-535218
E1123 09:19:11.850298   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535218 -n embed-certs-535218: exit status 2 (234.231666ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-535218 -n embed-certs-535218
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-535218 -n embed-certs-535218: exit status 2 (233.96877ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-535218 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535218 -n embed-certs-535218
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-535218 -n embed-certs-535218
E1123 09:19:13.302912   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.70s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-956d9" [49286c2c-1124-4185-b3a6-0a5687d4bc1c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-956d9" [49286c2c-1124-4185-b3a6-0a5687d4bc1c] Running
E1123 09:19:34.885152   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/bridge-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.00420441s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-956d9" [49286c2c-1124-4185-b3a6-0a5687d4bc1c] Running
E1123 09:19:41.630414   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/enable-default-cni-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003893342s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-816263 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-816263 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.44s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-816263 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263: exit status 2 (212.334064ms)

-- stdout --
	Paused

                                                
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263: exit status 2 (208.237983ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-816263 --alsologtostderr -v=1
E1123 09:19:44.044890   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:19:44.051305   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:19:44.062612   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:19:44.084036   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:19:44.125980   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:19:44.207828   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:19:44.370014   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263
E1123 09:19:44.691270   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-816263 -n default-k8s-diff-port-816263
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-918672 -n newest-cni-918672
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-918672 -n newest-cni-918672: exit status 7 (58.706641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-918672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/SecondStart (32.11s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-918672 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 09:20:25.022370   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/old-k8s-version-107629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:20:33.772305   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/custom-flannel-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:20:36.328594   18055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/bridge-001636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-918672 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (31.804569966s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-918672 -n newest-cni-918672
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.11s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-918672 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.11s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-918672 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-918672 -n newest-cni-918672
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-918672 -n newest-cni-918672: exit status 2 (262.942167ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-918672 -n newest-cni-918672
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-918672 -n newest-cni-918672: exit status 2 (275.954818ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-918672 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-918672 -n newest-cni-918672
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-918672 -n newest-cni-918672
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.11s)

Test skip (40/351)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 4.1
271 TestNetworkPlugins/group/cilium 4.17
286 TestStartStop/group/disable-driver-mounts 0.2
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.31s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-964416 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
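
TestDockerFlags, like the env tests below, is gated on the container runtime under test: the suite runs against one runtime per job (crio here), and these tests only apply to Docker. A minimal sketch of that gate, assuming a hypothetical --container-runtime flag wiring rather than the suite's real one:

package integration

import (
	"flag"
	"testing"
)

// containerRuntime is a hypothetical accessor for the harness's
// --container-runtime flag; the real suite wires this differently.
var containerRuntime = flag.String("container-runtime", "docker", "runtime under test")

func TestDockerFlags(t *testing.T) {
	if *containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", *containerRuntime)
	}
	// ...exercise docker-specific daemon flags...
}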

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
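
All of the TunnelCmd subtests above stop at the same precondition: minikube tunnel must edit the host routing table, and on this runner 'route' cannot run without a password prompt. One plausible probe for that is a non-interactive sudo check ('sudo -n' fails immediately instead of prompting); a sketch under that assumption, not the suite's actual check:

package integration

import (
	"os/exec"
	"testing"
)

// canSudoRoute reports whether `route` can run under sudo without prompting;
// `sudo -n` exits non-zero instead of asking for a password.
func canSudoRoute() bool {
	return exec.Command("sudo", "-n", "route", "-n").Run() == nil
}

func TestTunnel(t *testing.T) {
	if !canSudoRoute() {
		t.Skip("password required to execute 'route', skipping testTunnel")
	}
	// ...start the tunnel and verify the routes it adds...
}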

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
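
The platform-gated skips in this stretch (the darwin-only HyperKit tests earlier, the windows-only scheduled-stop test here) come down to a one-line runtime.GOOS check; a representative sketch:

package integration

import (
	"runtime"
	"testing"
)

func TestScheduledStopWindows(t *testing.T) {
	if runtime.GOOS != "windows" {
		t.Skip("test only runs on windows")
	}
	// ...schedule a stop and assert the node shuts down on time...
}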

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.1s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-001636 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-001636

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-001636

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /etc/hosts:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /etc/resolv.conf:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-001636

>>> host: crictl pods:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: crictl containers:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> k8s: describe netcat deployment:
error: context "kubenet-001636" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-001636" does not exist

>>> k8s: netcat logs:
error: context "kubenet-001636" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-001636" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-001636" does not exist

>>> k8s: coredns logs:
error: context "kubenet-001636" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-001636" does not exist

>>> k8s: api server logs:
error: context "kubenet-001636" does not exist

>>> host: /etc/cni:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: ip a s:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: ip r s:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: iptables-save:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: iptables table nat:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-001636" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-001636" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-001636" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: kubelet daemon config:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> k8s: kubelet logs:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
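
The kubeconfig dumped above is empty (clusters: null, current-context: ""), which is exactly why every kubectl probe in this block fails with "context was not found". A short client-go sketch that makes the failure mode explicit; the lookup mirrors what kubectl does with --context:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig kubectl would use (~/.kube/config).
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	if _, ok := cfg.Contexts["kubenet-001636"]; !ok {
		// Matches the errors above: the profile never wrote a context.
		fmt.Println(`context "kubenet-001636" does not exist`)
	}
}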

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-001636

>>> host: docker daemon status:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: docker daemon config:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: docker system info:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: cri-docker daemon status:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: cri-docker daemon config:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: cri-dockerd version:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: containerd daemon status:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: containerd daemon config:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: containerd config dump:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: crio daemon status:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: crio daemon config:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: /etc/crio:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

>>> host: crio config:
* Profile "kubenet-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001636"

----------------------- debugLogs end: kubenet-001636 [took: 3.911748618s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-001636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-001636
--- SKIP: TestNetworkPlugins/group/kubenet (4.10s)
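
The debugLogs block above is a fixed list of probes run against the (never-started) profile, with each command's output printed under its ">>>" heading. A compressed sketch of that collection loop, assuming a simple command table rather than minikube's actual helper:

package integration

import (
	"fmt"
	"os/exec"
)

type probe struct {
	name string
	argv []string
}

// dumpDebugLogs runs a fixed probe list against a profile and prints each
// command's combined output under a ">>>" heading, mirroring the log above.
func dumpDebugLogs(profile string) {
	probes := []probe{
		{"netcat: nslookup kubernetes.default",
			[]string{"kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"}},
		{"host: crictl pods",
			[]string{"minikube", "-p", profile, "ssh", "sudo crictl pods"}},
		// ...the remaining probes follow the same shape...
	}
	for _, p := range probes {
		out, _ := exec.Command(p.argv[0], p.argv[1:]...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s\n", p.name, out)
	}
}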

TestNetworkPlugins/group/cilium (4.17s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-001636 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-001636

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-001636

>>> host: /etc/nsswitch.conf:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /etc/hosts:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /etc/resolv.conf:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-001636

>>> host: crictl pods:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: crictl containers:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> k8s: describe netcat deployment:
error: context "cilium-001636" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-001636" does not exist

>>> k8s: netcat logs:
error: context "cilium-001636" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-001636" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-001636" does not exist

>>> k8s: coredns logs:
error: context "cilium-001636" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-001636" does not exist

>>> k8s: api server logs:
error: context "cilium-001636" does not exist

>>> host: /etc/cni:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: ip a s:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: ip r s:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: iptables-save:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: iptables table nat:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-001636

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-001636

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-001636" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-001636" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-001636

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-001636

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-001636" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-001636" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-001636" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-001636" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-001636" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: kubelet daemon config:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> k8s: kubelet logs:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:07:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.2:8443
  name: kubernetes-upgrade-985491
contexts:
- context:
    cluster: kubernetes-upgrade-985491
    user: kubernetes-upgrade-985491
  name: kubernetes-upgrade-985491
current-context: kubernetes-upgrade-985491
kind: Config
users:
- name: kubernetes-upgrade-985491
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kubernetes-upgrade-985491/client.crt
    client-key: /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/kubernetes-upgrade-985491/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-001636

>>> host: docker daemon status:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: docker daemon config:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: docker system info:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: cri-docker daemon status:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: cri-docker daemon config:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: cri-dockerd version:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: containerd daemon status:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: containerd daemon config:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: containerd config dump:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: crio daemon status:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: crio daemon config:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: /etc/crio:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

>>> host: crio config:
* Profile "cilium-001636" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001636"

----------------------- debugLogs end: cilium-001636 [took: 4.00273881s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-001636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-001636
--- SKIP: TestNetworkPlugins/group/cilium (4.17s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-317015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-317015
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)